I am trying to implement a rollback strategy for deployments using a CI pipeline. My goal is to roll back to a specific image version rather than the generic, black-box revision number used by the kubernetes rollout subcommand. Unfortunately, the default Kubernetes rollback only provides the --to-revision flag to select a revision number, which doesn't give you any information about what image that revision is using.
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
Is there a way to know what image and tag a given revision is using? The REVISION numbers alone don't tell me much, and mean I'd have to keep track, in some other way, of which image version is in each revision number. Overriding CHANGE-CAUSE to add more information in the deployment's annotations is, as I understand it, not advisable.
REVISION CHANGE-CAUSE
1 kubectl create -f deployment.yaml --record
2 kubectl set image deployment/httpd-deployment httpd=httpd:1.9.1
3 kubectl set image deployment/httpd-deployment httpd=nginx:1.91
I am looking for something like this in my CI pipeline, with a --to-image sort of flag:
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2 --to-image=httpd:1.7
Knowing that a --to-image flag doesn't exist, does that mean I have to abandon the rollout command entirely and just redeploy with a previous image, like below? (I'm thinking this doesn't actually harness the rollout capability.)
$ kubectl set image deployment/httpd-deployment httpd=httpd:1.9.1
Is there a better way to implement a Kubernetes rollback to a specific previous deployment image?
Related
I was doing a practice exam on the website killer.sh and ran into a question I feel I solved in a hacky way. Given a deployment that has had a bad rollout, revert to the last revision that didn't have any issues. If I check a deployment's rollout history, for example with the command:
kubectl rollout history deployment mydep
I get a small table with revision numbers and "change-cause" commands. Is there any way to check the changes made to the deployment's yaml for a specific revision? Because I was stumped trying to figure out which specific revision didn't have the error in it.
Behind the scenes, a Deployment creates a ReplicaSet for every revision and stamps it with the deployment.kubernetes.io/revision annotation, which is the REVISION you see in kubectl rollout history deployment mydep, so you can look at and diff the old ReplicaSets associated with the Deployment.
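For example, a quick sketch (assuming the Deployment's pods carry an app=mydep label; adjust the selector to your own labels):
# List the ReplicaSets the Deployment has created; -o wide also shows each one's image
kubectl get rs -l app=mydep -o wide
# Or print the full pod template (including the image) recorded for one revision
kubectl rollout history deployment mydep --revision=2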
On the other hand, being an eventually-consistent system, kubernetes has no notion of "good" or "bad" state, so it can't know what was the last successful deployment, for example; that's why deployment tools like helm, kapp etc. exist.
Kubernetes does not store more than what is necessary for it to operate and most of the time that is just the desired state because kubernetes is not a version control system.
This is precisely why you need a version control system coupled with tools like helm or kustomize, where you store the deployment yamls and apply them to the cluster with each new version of the software. This lets you go back in history and dig out the details when things break.
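A minimal sketch of that workflow, assuming the deployment yaml lives in a Git repo and you have identified a known-good commit some other way:
# Restore the manifest as it was at the known-good commit and re-apply it
git checkout <known-good-commit> -- deployment.yaml
kubectl apply -f deployment.yaml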
You can record the last command that changed the deployment with the --record option. When using --record, the last executed command (the change-cause) is added to the deployment's metadata.annotations. You will not see this in your local yaml file, but when you export the deployment as yaml you will notice the annotation.
Use the --record option like below:
kubectl create deployment <deployment-name> --image=<someimage> --dry-run=client -o yaml > testdeployment.yaml
kubectl create -f testdeployment.yaml --record
or
kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record
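To see the recorded change-cause on the live object afterwards (the deployment name is a placeholder):
kubectl get deployment <deploymentname> -o yaml | grep change-cause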
I tried to find useful information about when I should use --record. I ran 3 commands:
k set image deployment web1 nginx=lfccncf/nginx:latest --record
k rollout undo deployment/web1 --record
k -n kdpd00202 edit deployment web1 --record
Could anyone tell me if I need to use --record in each of these 3 commands?
When is it necessary to use --record and when is it useless?
Kubernetes desired state can be updated/mutated through two paradigms:
Either imperatively, using kubectl ad-hoc commands (k set, k create, k run, k rollout, ...)
Or declaratively, using YAML manifests with a single k apply
The declarative way is ideal for treating your k8s manifests as code: you can share this code with the team, version it through Git, for example, and keep track of its history leveraging GitOps practices (branching models, code review, CI/CD).
However, the imperative way cannot be reviewed by the team, since these ad-hoc commands are run by an individual and no one else can easily find out the cause of the change after it has been made.
To overcome the absence of an audit trail with imperative commands, the --record option is there to record the cause of the change as an annotation called kubernetes.io/change-cause, whose value is the imperative command itself.
(note below is from the official doc)
Note: You can specify the --record flag to write the command executed in the resource annotation kubernetes.io/change-cause. The recorded change is useful for future introspection. For example, to see the commands executed in each Deployment revision.
In conclusion:
Theoretically, --record is not mandatory.
Practically, it's mandatory in order to ensure changes leave a rudimentary audit trail behind, in line with SRE processes and DevOps culture.
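To illustrate the two paradigms with the commands from the question (the manifest filename web1.yaml is hypothetical):
# Imperative: the audit trail is the recorded command itself
kubectl set image deployment web1 nginx=lfccncf/nginx:latest --record
# Declarative: the audit trail lives in Git, where web1.yaml is versioned
kubectl apply -f web1.yaml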
You can specify the --record flag to write the command executed in the resource annotation kubernetes.io/change-cause. The recorded change is useful for future introspection. For example, to see the commands executed in each Deployment revision.
kubectl rollout history deployment.v1.apps/nginx-deployment
The output is similar to this:
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true
2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true
3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true
So it's not mandatory for any of the three commands, but it is recommended for kubectl set image, because if you skip --record you will not see anything in the CHANGE-CAUSE column shown above.
The --record flag also helps you see the details of the revision history, so rolling back to a previous version is smoother.
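For example, you can inspect a specific revision and then roll back to it (deployment/app and revision 2 are placeholders):
# Show the pod template (including the image) recorded for revision 2
kubectl rollout history deployment/app --revision=2
# Roll back to that revision
kubectl rollout undo deployment/app --to-revision=2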
When you don't append the --record flag, the CHANGE-CAUSE column will just show <none> in kubectl rollout history:
$ kubectl rollout history deployment/app
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 <none>
I'm new to k8s. I want to specify my own change cause with rollouts. I know I can use --record:
kubectl set image deployment/tomcat-deployment tomcat=tomcat:9.0.1 --record
But I'd like to specify my own change cause (e.g. "Update to Tomcat 9.0.1").
I tried this:
kubectl annotate deployment tomcat-deployment kubernetes.io/change-cause='Tomcat9.0.1'
but it sets the change-cause to the whole kubectl annotate command above.
Is there a way to do this?
Thanks
Mark
There is no K8s tool to help you do this. If you want to add annotations to keep track of what you are doing, you can do it through patches as follows:
kubectl patch RESOURCE RESOURCE_NAME --patch '{"metadata": {"annotations": {"my-annotation-key": "my-annotation-value"}}}'
So, if you would want to add an annotation to a deployment, you would do:
kubectl patch deployment tomcat-deployment --patch '{"metadata": {"annotations": {"kubernetes.io/change-cause": "Tomcat9.0.1"}}}'
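Alternatively, kubectl annotate can set the same key directly; --overwrite is needed if the annotation already exists (a sketch using the names from the question):
kubectl annotate deployment tomcat-deployment kubernetes.io/change-cause='Update to Tomcat 9.0.1' --overwrite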
Now, I don't think manually setting annotations like this is a good approach; I personally would never do it. The best way would probably be to have CI/CD implemented (Jenkins, Ansible) and keep track of changes through commits.
So in order to update the images running on a pod, I have to modify the deployment config (yaml file), and run something like kubectl apply -f deploy.yaml.
This means that if I'm not editing the yaml file manually, I'll have to use some template / search-and-replace functionality, which isn't really ideal.
Are there any better approaches?
It seems there is a kubectl rolling-update command, but I'm not sure if this works for 'deployments'.
For example running the following: kubectl rolling-update wordpress --image=eu.gcr.io/abcxyz/wordpress:deploy-1502443760
Produces an error of:
error: couldn't find a replication controller with source id == default/wordpress
I am using this for changing images in Deployments:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
If you view the yaml files as the source of truth, then use a tag like stable in the yaml and only issue kubectl set image commands when the tag is moved. Use the sha256 image id to actually trigger a rollout: the image names are matched like strings, so updating from :stable to :stable is a no-op even if the tag now points to a different image.
See updating a deployment for more details.
The above requires the deployment replica count to be set to more than 1, which is explained here: https://stackoverflow.com/a/45649024/1663462.
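For example, pinning to a digest forces a rollout even though the tag name itself hasn't changed (a sketch; the digest is a placeholder for whatever image your stable tag now points to):
kubectl set image deployment/nginx-deployment nginx=nginx@sha256:<digest-of-new-stable-image>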
Is there any way for me to replicate the behavior I get on cloud.docker where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy-handed, but it works fine. I'm really missing the auto-redeploy feature I have on cloud.docker.
Deleting the controller yaml file itself won't delete the actual controller in kubernetes unless you have a special configuration to do so. If you have more than 1 instance running, deleting the controller probably isn't what you would want because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment, update the image in the yaml file and execute kubectl apply.
For a ReplicationController, update the yaml file and execute kubectl rolling-update. See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image
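For example, assuming the ticketing app is converted to a Deployment (the deployment, container, and image names here are placeholders):
kubectl set image deployment/ticketing ticketing=<registry>/<image>:<new-tag>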
Alternatively you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few PaaS options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change
.spec.template.spec.containers[0].image from nginx:1.7.9 to
nginx:1.9.1, with the kubectl commands.
$ kubectl edit deployment/my-nginx
That’s it! The Deployment will declaratively update the deployed nginx
application progressively behind the scene. It ensures that only a
certain number of old replicas may be down while they are being
updated, and only a certain number of new replicas may be created
above the desired number of pods.
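To watch a rollout like this complete (or notice that it is stuck), kubectl rollout status can be used with the deployment from the quote above:
kubectl rollout status deployment/my-nginx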