Can I purge the revision history for a Deployment in Kubernetes?

I would like to control the number of revisions returned by this command in my k8s cluster:
kubectl rollout history deployment.v1.apps/<<my_deployment>>
Here is what I have:
REVISION CHANGE-CAUSE
10 set app version to 1.1.10
11 set app version to 1.1.11
12 set app version to 1.1.12
13 set app version to 1.1.13
14 set app version to 1.1.14
15 set app version to 1.1.15
16 set app version to 1.1.16
17 set app version to 1.1.17
18 set app version to 1.1.18
19 set app version to 1.1.19
20 set app version to 1.1.20
21 set app version to 1.1.21
I would like to have only:
21 set app version to 1.1.21
Is there a magical command like:
kubectl rollout history clean deployment.v1.apps/<<my_deployment>>

Yes, as per the documentation, it can be done by setting .spec.revisionHistoryLimit in your Deployment to 0:
Clean up Policy
You can set .spec.revisionHistoryLimit field in a Deployment to
specify how many old ReplicaSets for this Deployment you want to
retain. The rest will be garbage-collected in the background. By
default, it is 10.
Note: Explicitly setting this field to 0 will result in cleaning up all the history of your Deployment, and thus that Deployment
will not be able to roll back.
The easiest way to do it is by patching your Deployment. It can be done by executing the following command:
kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 0}]'
Then you can set it back to the previous value:
kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 10}]'
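If you want to double-check the value before and after patching, a quick sketch (nginx-deployment is just the example name used here):
# prints the current .spec.revisionHistoryLimit
kubectl get deployment nginx-deployment -o jsonpath='{.spec.revisionHistoryLimit}{"\n"}'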
UPDATE:
Thanks, I have already tried this. The history revision table is still
present. The only way I have found is to delete the deployment
configuration. – Sunitrams 20 mins ago
Are you sure you did it the same way? 🤔 Take a quick look and see how it works in my case:
$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 <none>
4 <none>
After running:
$ kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 0}]'
deployment.apps/nginx-deployment patched
the revision history is reduced to the latest one:
$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
4 <none>
When I set .spec.revisionHistoryLimit back to 10:
$ kubectl patch deployment nginx-deployment --type json -p '[{ "op": "replace", "path": "/spec/revisionHistoryLimit","value": 10}]'
deployment.apps/nginx-deployment patched
there is still only latest revision:
$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
4 <none>
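You can also confirm that the old ReplicaSets themselves were garbage-collected; a small sketch (the app=nginx label selector is an assumption based on a typical nginx Deployment):
# only the ReplicaSet backing the latest revision should remain
kubectl get replicaset -l app=nginx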

Why the k8s rollback (rollout undo) is not working?

After a successful
kubectl rollout restart deployment/foo
the
kubectl rollout undo deployment/foo
or
kubectl rollout undo deployment/foo --to-revision=x
have no effect. I mean, the pods are replaced by new ones and a new revision is created, which can be checked with
kubectl rollout history deployment foo
but when I call the service, the rollback had no effect.
I also tried removing imagePullPolicy: Always, guessing that it was always pulling the image even during the rollback, but with no success; the two things are probably unrelated anyway.
Edited: The test is simple: I change the health check route of the HTTP API to return something different in the JSON, and after the rollback the response does not change.
Edited:
I thought it might be a typo, but no: I was executing ... undo deployment/foo ..., and have now tried ... undo deployment foo .... It also gives me deployment.apps/foo rolled back, but there are no changes in the live system.
More tests: I changed my API route again to see what would happen if I executed a rollout undo to every previous revision, one by one. I went through the last 10 revisions, and nothing.
To be able to roll back to a previous version, don't forget to append the --record flag to your kubectl command, for example:
kubectl apply -f DEPLOYMENT.yaml --record
Then you should be able to see the history as you know with:
kubectl rollout history deployment DEPLOYMENT_NAME
And your rollback will work properly
kubectl rollout undo deployment DEPLOYMENT_NAME --to-revision=CHOOSEN_REVISION_NUMBER
Little example:
Consider my nginx deployment manifest "nginx-test.yaml":
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Let's create it:
❯ kubectl apply -f nginx-test.yaml --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment created
Let's check the image of this deployment; as expected from the manifest:
❯ k get pod nginx-deployment-74d589986c-k9whj -o yaml | grep image:
- image: nginx
image: docker.io/library/nginx:latest
Now let's modify the image of this deployment to "nginx:1.21.6":
# "nginx=" corresponds to the name of the container inside the pod created by the deployment.
❯ kubectl set image deploy nginx-deployment nginx=nginx:1.21.6
deployment.apps/nginx-deployment image updated
We can optionally check the rollout status:
❯ kubectl rollout status deployment nginx-deployment
deployment "nginx-deployment" successfully rolled out
we can check the rollout history with:
❯ kubectl rollout history deploy nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 kubectl apply --filename=nginx-test.yaml --record=true
2 kubectl apply --filename=nginx-test.yaml --record=true
Let's check the image of this deployment; as expected:
❯ k get pod nginx-deployment-66dcfc79b5-4pk7w -o yaml | grep image:
- image: nginx:1.21.6
image: docker.io/library/nginx:1.21.6
Oh no, I don't like this image! Let's roll back:
❯ kubectl rollout undo deployment nginx-deployment --to-revision=1
deployment.apps/nginx-deployment rolled back
The new pod is being created:
❯ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-deployment-66dcfc79b5-4pk7w 1/1 Running 0 3m41s 10.244.3.4 so-cluster-1-worker3 <none> <none>
pod/nginx-deployment-74d589986c-m2htr 0/1 ContainerCreating 0 13s <none> so-cluster-1-worker2 <none> <none>
After a few seconds:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-deployment-74d589986c-m2htr 1/1 Running 0 23s 10.244.4.10 so-cluster-1-worker2 <none> <none>
As you can see, it worked:
❯ k get pod nginx-deployment-74d589986c-m2htr -o yaml | grep image:
- image: nginx
image: docker.io/library/nginx:latest
Let's recheck the history:
❯ kubectl rollout history deploy nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 kubectl apply --filename=nginx-test.yaml --record=true
2 kubectl apply --filename=nginx-test.yaml --record=true
you can change the rollout history's CHANGE-CAUSE with the "kubernetes.io/change-cause" annotation:
❯ kubectl annotate deploy nginx-deployment kubernetes.io/change-cause="update image from 1.21.6 to latest" --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment annotated
Let's recheck the history:
❯ kubectl rollout history deploy nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
2 kubectl apply --filename=nginx-test.yaml --record=true
3 update image from 1.21.6 to latest
Let's describe the deployment:
❯ kubectl describe deploy nginx-deployment
Name: nginx-deployment
Namespace: so-tests
CreationTimestamp: Fri, 06 May 2022 00:56:09 -0300
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision: 3
kubernetes.io/change-cause: update image from 1.21.6 to latest
...
hope this has helped you, bguess.
I experienced a similar situation and it was more an own mistake rather than a configuration issue with my K8s manifests.
In my Docker image build workflow, I forgot to bump the version of my Docker image. I have a GitHub Actions workflow that pushes the image to DockerHub, and not updating the image version overwrote the existing tag with the latest changes in the application.
Then my kubectl rollout undo command was pulling the correct image but that image had the most recent changes in the application. In other words, image 1.1 was the same as 1.0. Running undo had no effect on the application state.
Stupid mistake, but that was my experience, in case it helps someone.
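One way to catch this kind of "same tag, different content" situation is to compare the image digests the pods are actually running; a sketch (the pod name is just the one from the example above):
# imageID contains the resolved digest, so two "identical" tags can be told apart
kubectl get pod nginx-deployment-74d589986c-m2htr -o jsonpath='{.status.containerStatuses[*].imageID}{"\n"}'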

How can I use the io.kubernetes official Java client to do a rollout restart? [duplicate]

Kubernetes 1.15 introduced the command
kubectl rollout restart deployment my-deployment
Which would be the endpoint to call through the API?
For example if I want to scale a deployment I can call
PATCH /apis/apps/v1/namespaces/my-namespace/deployments/my-deployment/scale
If you dig around in the kubectl source you can eventually find (k8s.io/kubectl/pkg/polymorphichelpers).defaultObjectRestarter. All that does is change an annotation:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: '2006-01-02T15:04:05Z07:00'
Anything that changes a property of the embedded pod spec in the deployment object will cause a restart; there isn't a specific API call to do it.
The useful corollary to this is that, if your kubectl and cluster versions aren't in sync, you can use kubectl rollout restart from kubectl 1.15 against older clusters, since it doesn't actually depend on any changes in the Kubernetes API.
TLDR
curl --location --request PATCH 'https://kubernetes.docker.internal:6443/apis/apps/v1/namespaces/default/deployments/keycloak?fieldManager=kubectl-rollout&pretty=true' \
--header 'Content-Type: application/strategic-merge-patch+json' \
--data-raw '{
  "spec": {
    "template": {
      "metadata": {
        "annotations": {
          "kubectl.kubernetes.io/restartedAt": "<time.Now()>"
        }
      }
    }
  }
}'
If you have kubectl, you can debug the calls against a local minikube by adding the verbosity flag -v=9 to your command.
That way you can do a dummy rollout restart on your local cluster and see exactly which API requests are made.
For future readers: this can vary between versions, but if you are on apps/v1 it should be OK.
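If you just want to reproduce the same patch from a shell before wiring it into the Java client, a minimal sketch (reusing the keycloak Deployment in the default namespace from the curl example above):
kubectl patch deployment keycloak -n default --type strategic \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}}}}"
The Java client can then send the same strategic-merge patch body to the deployments endpoint.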

Where does Helm store installation state?

When you run a helm install command, Helm outputs information like the revision of this installation.
Where does Helm store this information? (I assume it's in the cluster somewhere.)
Depends on configuration
I found the answer in the docs.
Helm 3 changed the default release information storage to Secrets in the namespace of the release.
https://helm.sh/docs/topics/advanced/#storage-backends
It goes on to say that you can configure it to instead store that state in a ConfigMap or in a PostgreSQL database.
So by default, kubectl get secret --namespace my-namespace will include an entry like
sh.helm.release.v1.st.v1 helm.sh/release.v1 1 13m
And kubectl describe secret sh.helm.release.v1.st.v1 will output something like
Name:         sh.helm.release.v1.st.v1
Namespace:    my-namespace
Labels:       modifiedAt=1613580504
              name=st
              owner=helm
              status=deployed
              version=1
Annotations:  <none>
Type:         helm.sh/release.v1
The storage is changed in Helm 3 as follows:
Releases are stored as Secrets by default (it could use PostgreSQL).
Storage is in the namespace of the release.
Naming is changed to sh.helm.release.v1.<release_name>.v<revision_version>.
The Secret type is set as helm.sh/release.v1.
List installed helm Charts:
$ helm ls --all-namespaces
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
chrt-foobar default 2 2019-10-14 15:18:31.529138228 +0100 IST deployed chrt-foobar-0.1.0 1.16.0
chrt-test test 1 2019-10-14 15:20:28.196338611 +0100 IST deployed chrt-test-0.1.0 1.16.0
List Helm release history:
$ kubectl get secret -l "owner=helm" --all-namespaces
NAMESPACE NAME TYPE DATA AGE
default sh.helm.release.v1.chrt-foobar.v1 helm.sh/release.v1 1 3m2s
default sh.helm.release.v1.chrt-foobar.v2 helm.sh/release.v1 1 2m40s
test sh.helm.release.v1.chrt-test.v1 helm.sh/release.v1 1 43s
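If you are curious about what is stored in one of these Secrets, you can decode the release payload; a hedged sketch, assuming the chrt-foobar release listed above (Helm gzips and base64-encodes the release, and the Secret API adds another layer of base64, so the exact pipeline may vary by Helm version):
kubectl get secret sh.helm.release.v1.chrt-foobar.v2 -n default -o jsonpath='{.data.release}' \
  | base64 -d | base64 -d | gunzip -c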
There are two parts to Helm in Helm 2: the Helm client (helm) and the Helm server (Tiller), which was removed in Helm 3.
When we run helm init, it installs the Tiller part on the Kubernetes cluster. You can confirm the installation with:
kubectl get pods --namespace kube-system
# you should see Tiller running
As for "Where does Helm store this information? (I assume it's in the cluster somewhere.)":
By default, Tiller stores release information in ConfigMaps in the namespace where it is running; the new version also supports an SQL storage backend for release information (see storage-backends).
To get the release information:
kubectl get configmap -n kube-system -l "OWNER=TILLER"
then check the release info from the ConfigMap:
kubectl get configmap -n kube-system -o yaml myapp.v2
See how-helm-uses-configmaps-to-store-data for details.

Deleting deployment leaves trailing replicasets and pods

I am running Kubernetes in GCP and, since updating a few months ago (I am now running 1.17.13-gke.2600), I am observing trailing ReplicaSets and Pods after deployment deletion. Consider the state before deletion:
$ k get deployment | grep parser
parser-devel 1/1 1 1 38d
$ k get replicaset | grep parser
parser-devel-66bfc86ddb 0 0 0 27m
parser-devel-77898d9b9d 1 1 1 5m49s
$ k get pod | grep parser
parser-devel-77898d9b9d-4w48w 1/1 Running 0 6m2s
Then I delete the deployment:
$ k delete deployment parser-devel
deployment.apps "parser-devel" deleted
$ k get replicaset | grep parser
parser-devel-66bfc86ddb 0 0 0 28m
parser-devel-77898d9b9d 1 1 1 7m1s
$ k get pod | grep parser
parser-devel-77898d9b9d-4w48w 1/1 Running 0 7m6s
Then I try to delete the replicasets:
$ k delete replicaset parser-devel-66bfc86ddb parser-devel-77898d9b9d
replicaset.apps "parser-devel-66bfc86ddb" deleted
replicaset.apps "parser-devel-77898d9b9d" deleted
$ k get pod | grep parser
parser-devel-77898d9b9d-4w48w 1/1 Running 0 8m14s
As far as I understand Kubernetes, this is not correct behaviour, so why is it happening?
How about checking the ownerReferences of the ReplicaSets created by your Deployment? Refer to Owners and dependents for more details. For the Deployment's dependents to be removed with it, the Deployment's name and uid must match exactly in the ReplicaSet's ownerReferences. I have also seen a similar issue happen when something was wrong with the Kubernetes API server, so restarting the API service may help resolve it.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  ...
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    blockOwnerDeletion: true
    kind: Deployment
    name: your-deployment
    uid: xxx
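A quick way to inspect this before deleting the Deployment; a sketch reusing the names from the question:
# compare the uid in the ReplicaSet's ownerReferences with the Deployment's uid
kubectl get replicaset parser-devel-77898d9b9d -o jsonpath='{.metadata.ownerReferences}{"\n"}'
kubectl get deployment parser-devel -o jsonpath='{.metadata.uid}{"\n"}'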
The trailing ReplicaSets that you can see after deployment deletion depend on the revision history limit that you have in your Deployment.
.spec.revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to retain to allow rollback.
By default, 10 old ReplicaSets will be kept.
You could see the number of ReplicaSets with the following command:
kubectl get deployment DEPLOYMENT -o yaml | grep revisionHistoryLimit
But you can modify this value with:
kubectl edit deployment DEPLOYMENT
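Or non-interactively, as a sketch (the value 2 is just an example):
kubectl patch deployment DEPLOYMENT -p '{"spec":{"revisionHistoryLimit":2}}'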
Edit 1
I created a GKE cluster on the same version (1.17.13-gke.2600) in order to check whether dependent resources are deleted when the parent object (Deployment) is deleted.
For testing purposes, I created an nginx Deployment and then deleted it with kubectl delete deployment DEPLOYMENT_NAME; the Deployment and all its dependents (the Pods and ReplicaSets it created) were deleted.
Then I tested it again, this time adding the kubectl flag --cascade=false, as in kubectl delete deployment DEPLOYMENT_NAME --cascade=false, and all the dependent resources remained; only the Deployment was removed. In that situation (leaving orphaned resources), the kube-controller-manager (specifically the garbage collector) should delete those resources sooner or later.
Based on the tests I made, it seems that the GKE version is OK, since the dependent resources created by my Deployment were deleted along with it in my first test.
The cascade option is set to true by default for several command verbs like delete; you could also check this other documentation. Even so, I would like to know if you can create a Deployment and then try to delete it with kubectl delete deployment DEPLOYMENT_NAME --cascade=true, to see whether explicitly forcing cascade deletion helps in this case.
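As a side note for newer kubectl versions (where the boolean values for --cascade are deprecated in favour of background, foreground, and orphan), you can also force foreground cascading deletion explicitly; a sketch with the deployment name from the question:
# wait for the dependents (ReplicaSets, Pods) to be deleted before the Deployment itself is removed
kubectl delete deployment parser-devel --cascade=foreground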

Adding --record=true on a deployment file - Kubernetes

I'm new to Kubernetes and I wanted to know if there is a way I can add '--record=true' inside the deployment YAML file, so I do not have to type it on the command line!
I know it goes like this: kubectl apply -f deployfile.yml --record
I am asking this because we work on a team, and not everyone is using --record=true at the end of the command when deploying files to kubernetes!
Thank you in advance,
As far as I'm aware, there is no equivalent of the --record=true kubectl flag that you can add to a manifest.
The command which was used to create or update the Deployment is stored in the kubernetes.io/change-cause annotation. This is used for the rollout history, which is described here.
First, check the revisions of this Deployment:
kubectl rollout history deployment.v1.apps/nginx-deployment
The output is similar to this:
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true
2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:
Annotating the Deployment with kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"
Appending the --record flag to save the kubectl command that is making changes to the resource.
Manually editing the manifest of the resource (see the sketch right below).
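For the manifest route, a minimal sketch of what that can look like (the deployment name, image, and change-cause text here are made up; the kubernetes.io/change-cause annotation is the part that matters):
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  annotations:
    kubernetes.io/change-cause: "set app version to 1.25.3"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25.3
EOF
Your team (or CI) then only has to keep that annotation up to date in the file instead of remembering --record on the command line.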
To see the details of each revision, run:
kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
The output is similar to this:
deployments "nginx-deployment" revision 2
Labels: app=nginx
pod-template-hash=1159050644
Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
Environment Variables: <none>
No volumes.
For the command history, I would use $ history or check the user's .bash_history:
$ tail /home/username/.bash_history
Create an alias in your .bashrc or .zshrc as below:
alias kubectl='kubectl --record'
and then do kubectl apply -f deployfile.yml
or
alias kr='kubectl --record'
and kr apply -f deployfile.yml
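If the alias does not play well with flag ordering in your kubectl version, a small wrapper function is another option; a sketch (the function name ka is made up):
# always record the change cause when applying manifests
ka() {
  kubectl apply "$@" --record
}
# usage: ka -f deployfile.yml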