Kubernetes - Update deployment with custom change cause

I'm new to k8s. I want to specify my own change cause with rollouts. I know I can use the --record:
kubectl set image deployment/tomcat-deployment tomcat=tomcat:9.0.1 --record
But I'd like to specify my own change cause (e.g. "Update to Tomcat 9.0.1").
I tried this:
kubectl annotate deployment tomcat-deployment kubernetes.io/change-cause='Tomcat9.0.1'
but then the change cause ends up being the whole kubectl annotate command above.
Is there a way to do this?
Thanks
Mark

There is no K8s tool to help you do this. If you want to add annotations to keep track of what you are doing, you can do it through patches as follows:
kubectl patch RESOURCE RESOURCE_NAME --patch '{"metadata": {"annotations": {"my-annotation-key": "my-annotation-value"}}}'
So, if you would want to add an annotation to a deployment, you would do:
kubectl patch deployment tomcat-deployment --patch '{"metadata": {"annotations": {"kubernetes.io/change-cause": "Update to Tomcat 9.0.1"}}}'
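As an optional sanity check, the custom message should then show up in the CHANGE-CAUSE column of the rollout history:
kubectl rollout history deployment tomcat-deployment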
Now, I don't think this is a good approach, and I personally would never do this. The best way would probably be to have CI/CD implemented (Jenkins, Ansible) and keep track of changes through commits.

Related

Can I see a rollout in more detail?

I was doing a practice exam on the website killer.sh, and ran into a question I feel I solved in a hacky way. Given a deployment that has had a bad rollout, revert to the last revision that didn't have any issues. If I check a deployment's rollout history, for example with the command:
kubectl rollout history deployment mydep
I get a small table with revision numbers and "change-cause" commands. Is there any way to check the changes made to the deployment's yaml for a specific revision? I was stumped trying to figure out which specific revision didn't have the error in it.
Behind the scenes a Deployment creates a ReplicaSet for each revision, annotated with deployment.kubernetes.io/revision set to the REVISION you see in kubectl rollout history deployment mydep, so you can look at and diff the old ReplicaSets associated with the Deployment.
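You can also ask kubectl for the pod template of a single revision and compare two revisions by eye (mydep is the deployment from the question; the revision numbers are just examples):
kubectl rollout history deployment mydep --revision=2
kubectl rollout history deployment mydep --revision=3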
On the other hand, being an eventually-consistent system, Kubernetes has no notion of a "good" or "bad" state, so it cannot know what the last successful deployment was, for example; that's why deployment tools like helm, kapp etc. exist.
Kubernetes does not store more than what is necessary for it to operate, and most of the time that is just the desired state, because Kubernetes is not a version control system.
This is precisely why you need a version control system coupled with tools like helm or kustomize, where you store the deployment yamls and apply them to the cluster with each new version of the software. This helps in going back in history to dig out details when things break.
You can record the last executed command that changed the deployment with the --record option. When using --record, the last executed command (the change-cause) is stored in the deployment's metadata.annotations. You will not see this in your local yaml file, but when you export the deployment as yaml you will notice the change.
Use the --record option like below (note that generating the yaml needs --dry-run=client -o yaml, otherwise kubectl create deployment creates the deployment immediately):
kubectl create deployment <deployment-name> --image=<someimage> --dry-run=client -o yaml > testdeployment.yaml
kubectl create -f testdeployment.yaml --record
or
kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record
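The recorded command then appears as the change cause in the history; the output looks roughly like this (shape is illustrative):
kubectl rollout history deployment/<deploymentname>
REVISION  CHANGE-CAUSE
1         kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record=true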

kubernetes gcp caching old image

I'm running a GKE cluster and there is a deployment that uses an image which I push to Container Registry on GCP. The issue is that even though I build the image and push it with the latest tag, the deployment keeps creating new pods with the old one cached. Is there a way to update it without re-deploying (aka without destroying it first)?
There is a known issue with Kubernetes where, even if you change a ConfigMap, the old config remains; you can either redeploy or work around it with
kubectl patch deployment $deployment -n $ns -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
Is there something similar for cached images?
I think you're looking for kubectl set or kubectl patch, which I found in the Kubernetes documentation.
To update the image of a deployment you can use kubectl set:
kubectl set image deployment/name_of_deployment name_of_container=name_of_image:tag
To update the image of a pod you can use kubectl patch:
kubectl patch pod name_of_pod -p '{"spec":{"containers":[{"name":"name_of_container_from_yaml","image":"name_of_image"}]}}'
You can always use kubectl edit, which allows you to directly edit any API resource you can retrieve via the command line tool.
kubectl edit deployment name_of_deployment
Let me know if you have any more questions.
1) You should change the way you think about this. Destroying a pod is not bad; application downtime is what is bad. You should always plan your deployments so that they can tolerate the death of one pod. Use multiple replicas for stateless apps and use clusters for stateful apps. Use Kubernetes rolling updates for any changes to your deployments. Rolling updates have many extremely important settings which directly influence the uptime of your apps. Read them carefully.
2) The reason why Kubernetes launches the old image is that by default it uses
imagePullPolicy: IfNotPresent for images with a specific tag. Use imagePullPolicy: Always and it will always try to pull the latest version on redeploy.
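A minimal sketch of where that field lives in the Deployment's pod template (the image path and container name here are made up):
spec:
  template:
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-image:latest
        imagePullPolicy: Always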

Kubernetes - validate deployments

I have a namespace (literally named namespace) which has ~10-15 deployments.
I create one big yaml file and apply it on every deploy.
How do I validate, wait, watch, or block until all deployments have been rolled out?
Currently I am thinking of:
get the list of deployments
for each deployment, make an API call to get its status
once all deployments are "green", end the process, signaling the deployment/ship is done.
What are the possible statuses of deployments, and is there already a similar tool that can do this? https://github.com/Shopify/kubernetes-deploy is kind of what I am searching for, but it forces a yml structure and so on.
What would be the best approach?
Set a readiness probe and use kubectl rollout status deployment <deployment_name> to see the deployment rollout status
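If you need to block on all of them at once, a rough sketch of the loop described in the question (assuming everything lives in the namespace namespace; the timeout value is illustrative):
for d in $(kubectl -n namespace get deployments -o name); do
  kubectl -n namespace rollout status "$d" --timeout=120s || exit 1
done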
Consider using Helm for managing deployments. Helm allows you to create reusable templates that can be applied to more than one environment. Read more here: https://helm.sh/docs/chart_template_guide/#getting-started-with-a-chart-template
You can create one big chart for all your services or you can create separate Helm charts for each service.
Helm also allows you to run tests after deployment is done. Read more here: https://helm.sh/docs/developing_charts/#a-breakdown-of-the-helm-test-hooks
You probably want to use kubectl wait https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait
It lets you wait for a specific condition of a specific object
In your case:
kubectl -n namespace \
wait --for=condition=Available --timeout=32s \
deployment/name
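kubectl wait also accepts --all, so to cover every deployment in the namespace at once (the timeout here is just an example):
kubectl -n namespace \
wait --for=condition=Available --timeout=120s \
deployment --all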
Use the --dry-run option in the apply/create command to check the syntax.

Is there a way to roll back a deployment by modifying the Kubernetes Deployment object?

I cannot find a way to roll back my deployment directly, like kubectl rollout undo xxx, in a third-party k8s client called fabric8, but the client does offer a way to modify k8s objects.
So I'm wondering: is there a way to roll back a deployment by modifying the Kubernetes Deployment object?
There is no direct way to do something like that in the fabric8 kubernetes-client.
So you will have to simulate the kubectl rollout command using the client to modify the Deployment object (pretty much as you described).
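For reference, the kubectl behaviour you would be simulating is (the revision number is just an example):
kubectl rollout undo deployment/xxx --to-revision=2
Under the hood this copies the pod template from the target revision's ReplicaSet back into the Deployment's spec.template, which is exactly the kind of modification the client already supports.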
This is however a really neat feature and I am sure that it wouldn't be that hard to add to the client.
I can see that you've already raised an issue about it, so we can take it from there.

Need to change pod definition before rolling update

Kubernetes newbie question: Can I somehow include my pod definition inside my deployment definition?
I currently have a pod.yml, a service.yml and a deployment.yml file. Both in pod.yml and deployment.yml I specify my docker image and tag: user/image:v1.
To do a rolling update, I tried doing kubectl set image deployment/api api=user/image:v2
However, that doesn't work alone. It seems to conflict with the image tag in the pod definition; I need to also update the pod to tag v2 for kubectl set image to work. I feel like I'm doing something wrong. Thoughts?
Yes, you can include all definitions in one file. Have a look at the guestbook-all-in-one.yaml example.
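As a minimal sketch of the idea (the image is from your question; the other names and values are illustrative), one file can hold the Deployment and the Service separated by ---, with no standalone pod definition, since the Deployment's template is what creates the pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: user/image:v1
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - port: 80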
The recommended way to do a rolling update is to change the file and then use apply:
$ vim guestbook-all-in-one.yaml # make the desired changes
$ kubectl apply -f guestbook-all-in-one.yaml # apply these changes
If possible, you should also have this file under version control, so that the file with the current status is always easily accessible.