Is it possible to find out resource update time from kubemaster?

We can see updates to deployment using command:
kubectl rollout history deploy/<name>
We can also see updated config using:
kubectl rollout history --revision=<revision-#> deploy/<name>
I'm not sure how to find out a given revision's update time. Is it possible to find it?

If you are storing events from the namespace or the API server logs, you might be able to find out. One crude way would be to look at the creation time of the deployment's ReplicaSets - kubectl get replicaset
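For example, a rough sketch of that approach (the namespace and the app=<name> label are placeholders for whatever selector your deployment uses); each ReplicaSet's creation time roughly corresponds to when its revision was first rolled out:
kubectl get replicaset -n <namespace> -l app=<name> \
  -o jsonpath='{range .items[*]}{.metadata.annotations.deployment\.kubernetes\.io/revision}{"\t"}{.metadata.name}{"\t"}{.metadata.creationTimestamp}{"\n"}{end}'
This prints revision, ReplicaSet name, and creation timestamp per line, which you can match against kubectl rollout history.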

Related

How to get last updated date for k8s hpa?

I tried with the below command but it provides only the created date. How do I get when it was last updated?
kubectl describe hpa -n xyz hpa-hello
Example: Today I create an HPA with max replicas 3 and tomorrow I apply the same yaml but with max replicas 6. If I do that, I can see only the created date and not the updated date. The description correctly shows the updated max replicas as 6.
Update: There is no direct way to obtain this information!
I assume here that the last updated date or time means when it was last autoscaled?
To get details about the Horizontal Pod Autoscaler, you can use kubectl get hpa with the -o yaml flag. The status field contains information about the current number of replicas and any recent autoscaling events.
kubectl get hpa <hpa name> -o yaml
In this output we can see the last transition times along with messages.
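For instance, a quick way to pull just those fields (the HPA name and namespace are placeholders; .status.conditions is only populated when the cluster serves the autoscaling/v2 API, and .status.lastScaleTime directly answers the "when did it last scale" part):
kubectl get hpa <hpa-name> -n <namespace> -o jsonpath='{.status.lastScaleTime}{"\n"}'
kubectl get horizontalpodautoscalers.v2.autoscaling <hpa-name> -n <namespace> \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.lastTransitionTime}{"\t"}{.message}{"\n"}{end}'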
Refer to this doc, which might help you view the details about the HPA.
Edit 1:
Moreover, I have gone through many docs and I don't think we can get this info from a get command. For this, an audit policy can be configured in k8s to log all activities generated by users for a specific type of resource. This might help you to find the logs.
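As a rough illustration only (the rule below is an assumption you would adapt to your cluster, and the policy file has to be wired into the API server via --audit-policy-file), a policy that logs changes to HPA objects might look like:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log full request/response bodies for writes to HPA objects (adjust verbs/level as needed)
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: "autoscaling"
        resources: ["horizontalpodautoscalers"]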

How to check if k8s deployment is passed/failed manually?

I'm trying to understand what kubectl rollout status <deployment name> does.
I'm using k8s-node-api, and in this thread (https://github.com/kubernetes-client/javascript/issues/536) the maintainer suggests using the k8s watch API to watch for changes in the deployment, but I'm not sure what to check.
Questions:
How do I make sure the new deployment succeeded?
How do I know the new deployment failed?
Is it safe to assume that if the spec/containers/0/image changes to something different than what I'm expecting, it means there is a new deployment and I should stop watching?
My questions are probably ambiguous because I'm new to k8s.
Any guidance will be great!
I can't use kubectl - I'm writing code that does this based on what kubectl does.
As we discussed in the comment section, to check any object or process in Kubernetes you can use kubectl - see: kubernetes.io/docs/reference/kubectl/overview.
Take a look at how to execute the proper command to obtain the required information - kubectl-rollout.
If you want to check how the rollout process looks in the background, look at the source code - src-code-rollout-kubernetes.
Pay attention to this if you are using node-api:
The node-api group was migrated to a built-in API in the k8s.io/api repo with the v1.14
release. This repo is no longer maintained, and no longer synced
with core kubernetes as of the v1.18 release.
I often use the following 2 commands to check a deployment's status:
kubectl describe deployment <your-deployment-name>
kubectl get deployment <your-deployment-name> -oyaml
The first will show you some events about the process of scheduling the deployment.
The second is more detailed. It contains all of your deployment's resource info in YAML format.
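If you need the same answer programmatically rather than by reading describe output, a hedged sketch (the deployment name is a placeholder) is to read the fields kubectl rollout status itself relies on. The Progressing condition's reason becomes NewReplicaSetAvailable once the rollout completed and ProgressDeadlineExceeded if it stalled past the progress deadline, and you can compare desired vs. updated vs. available replicas as a second check:
kubectl get deployment <your-deployment-name> -o jsonpath='{.status.conditions[?(@.type=="Progressing")].reason}{"\n"}'
kubectl get deployment <your-deployment-name> -o jsonpath='{.spec.replicas}{" "}{.status.updatedReplicas}{" "}{.status.availableReplicas}{"\n"}'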
Is that enough for your needs?
After digging through the k8s source code, I was able to implement this logic by myself in Node.js:
How do I make sure the new deployment succeeded?
How do I know the new deployment failed?
https://github.com/stavalfi/era-ci/blob/master/packages/steps/src/k8s/utils.ts#L387
Basically, I'm subscribing to events about a specific deployment (AFTER changing something in it, for example, the image).
Is it safe to assume that if the spec/containers/0/image changes to something different than what I'm expecting, it means there is a new deployment and I should stop watching?
Yes. But https://github.com/stavalfi/era-ci/blob/master/packages/steps/src/k8s/utils.ts#L62 will also help to identify that there is a new deployment going on and that yours is no longer the "latest-deployment".
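A rough kubectl-level equivalent of that check (the deployment name is a placeholder; the same fields are visible through the watch API) is to compare the spec generation with what the controller has observed, and to watch the revision annotation:
kubectl get deployment <deployment-name> -o jsonpath='{.metadata.generation}{" vs "}{.status.observedGeneration}{"\n"}'
kubectl get deployment <deployment-name> -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}{"\n"}'
If generation is ahead of observedGeneration, a newer spec has been submitted that the controller has not processed yet; if the revision annotation jumps past the one you started with, someone else's rollout has superseded yours.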
For More Info
I wrote an answer about how deployment works under the hood: https://stackoverflow.com/a/66092577/806963

Can I see a rollout in more detail?

I was doing a practice exam on the website killer.sh, and ran into a question I feel I solved in a hacky way. Given a deployment that has had a bad rollout, revert to the last revision that didn't have any issues. If I check a deployment's rollout history, for example with the command:
kubectl rollout history deployment mydep
I get a small table with version numbers and "change-cause" commands. Is there any way to check the changes made to the deployment's yaml file for a specific revision? Because I was stumped trying to figure out which specific revision didn't contain the error.
Behind the scenes a Deployment creates a ReplicaSet for each revision, with its deployment.kubernetes.io/revision annotation set to the REVISION you see in kubectl rollout history deployment mydep, so you can look at and diff old ReplicaSets associated with the Deployment.
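For example (the ReplicaSet names below are placeholders; the revision number comes from the history table), you can print the pod template recorded for a given revision, or crudely diff two ReplicaSets:
kubectl rollout history deployment mydep --revision=<revision-#>
diff <(kubectl get rs <old-rs-name> -o yaml) <(kubectl get rs <new-rs-name> -o yaml)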
On the other hand, being an eventually-consistent system, kubernetes has no notion of "good" or "bad" state, so it can't know what was the last successful deployment, for example; that's why deployment tools like helm, kapp etc. exist.
Kubernetes does not store more than what is necessary for it to operate and most of the time that is just the desired state because kubernetes is not a version control system.
This is precisely why you need to have a version control system coupled with tools like helm or kustomize, where you store the deployment yamls and apply them to the cluster with a new version of the software. This helps in going back in history to dig out details when things break.
You can record the last executed command that changed the deployment with the --record option. When using --record you will see the last executed command (change-cause) in the deployment's metadata.annotations. You will not see this in your local yaml file, but when you export the deployment as yaml you will notice the change.
Use the --record option like below:
kubectl create deployment <deployment name> --image=<someimage> --dry-run=client -o yaml > testdeployment.yaml
kubectl create -f testdeployment.yaml --record
or
kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record
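The recorded command then shows up as the kubernetes.io/change-cause annotation on the deployment, which you can read back (deployment name is a placeholder) with either of:
kubectl rollout history deployment/<deploymentname>
kubectl get deployment <deploymentname> -o jsonpath='{.metadata.annotations.kubernetes\.io/change-cause}{"\n"}'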

kubernetes gcp caching old image

I'm running a GKE cluster and there is a deployment that uses an image which I push to Container Registry on GCP. The issue is: even though I build the image and push it with the latest tag, the deployment keeps on creating new pods with the old one cached. Is there a way to update it without re-deploying (aka without destroying it first)?
There is a known issue with Kubernetes where even if you change a ConfigMap the old config remains, and you can either redeploy or work around it with
kubectl patch deployment $deployment -n $ns -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
is there something similar with cached images?
I think you're looking for kubectl set or kubectl patch, which I found in the Kubernetes documentation.
To update the image of a deployment you can use kubectl set:
kubectl set image deployment/name_of_deployment name_of_container=name_of_image:new_tag
To update the image of your pod you can use kubectl patch:
kubectl patch pod name_of_pod -p '{"spec":{"containers":[{"name":"name_of_container_from_yaml","image":"name_of_image"}]}}'
You can always use kubectl edit, which allows you to directly edit any API resource you can retrieve via the command line tool.
kubectl edit deployment name_of_deployment
Let me know if you have any more questions.
1) You should change your way of thinking. Destroying a pod is not bad. Application downtime is what is bad. You should always plan your deployments in such a way that they can tolerate one pod's death. Use multiple replicas for stateless apps and use clusters for stateful apps. Use Kubernetes rolling updates for any changes to your deployments. Rolling updates have many extremely important settings which directly influence the uptime of your apps. Read them carefully.
2) The reason why Kubernetes launches the old image is that by default it uses
imagePullPolicy: IfNotPresent. Use imagePullPolicy: Always and it will always try to pull the latest version on redeploy.
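A minimal sketch of where that setting lives in the Deployment's pod template (the container name and image path are placeholders):
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: gcr.io/<project-id>/my-app:latest
          imagePullPolicy: Always   # pull on every pod start instead of reusing the node's cached image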

kubectl - Retrieving the current/"new" replicaset for a deployment in json format

We're trying to build some simple automation to detect if a deployment has finished a rolling update or not, or if a rolling update has failed. The naive way (which we do today) is to simply get all the pods for the deployment, and wait until they're ready. We look at all the pods, and if one of the pods has restarted 3 or more times (since we started the update), we roll back the update. This works fine most of the time. The times it doesn't are when the currently deployed version is in a failing state for any reason, so that the existing pods are continuously restarting, triggering a rollback from our end.
So the idea I had was to monitor the pods in the new replicaset that is being rolled out after I initiate a rolling update. This way we won't detect failing pods in the previous version as failures of the rolling update. I have found out how to find the pods within a replicaset (PS: I use powershell as my shell but hopefully the translation to bash or whatever you prefer is fairly straightforward):
kubectl get pod -n <namespace> -l <selector> -o json | ConvertFrom-Json | Where {$_.metadata.ownerReferences.name -eq <replicaset-name>}
And I can easily find which replicaset belongs to a deployment using the same method, just querying for replicasets and filtering on their ownerReference again.
HOWEVER, a deployment has at least two replicasets during a rolling update: The new, and the old. I can see the names of those if I use "kubectl describe deployment" -- but that's not very automation-friendly without some text processing. It seems like a fragile way to do it. It would be great if I could find the same information in the json, but seems that doesn't exist.
So, my question is: How do I find the connection between the deployments and the current/old replicaset, preferably in JSON format? Is there some other resource that "connects" these two, or is there some information on the replicasets themselves I can use to distinguish the new from the old replicaset?
The deployment will indicate the current "revision" of the replica set with the deployment.kubernetes.io/revision annotation.
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
This will exist on both the deployment and the replicaset. The replicaset with revision N-1 will be the "old" one.
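So, sticking to JSON/JSONPath output (the namespace, deployment name, and label selector below are placeholders), you can read the deployment's current revision and then pick out the ReplicaSet whose annotation matches it:
kubectl get deployment <deployment-name> -n <namespace> -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}{"\n"}'
kubectl get rs -n <namespace> -l <selector> -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.deployment\.kubernetes\.io/revision}{"\n"}{end}'
The ReplicaSet whose revision equals the deployment's is the "new" one; the others are older revisions.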
The way used in kubectl source code is:
sort.Sort(replicaSetsByCreationTimestamp(rsList))
which means:
Select all ReplicaSets created by the Deployment;
Sort them by creationTimestamp;
Choose the most recent one, which is the current new ReplicaSet.
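A rough kubectl equivalent of that logic (the label selector is a placeholder for the deployment's selector):
kubectl get rs -n <namespace> -l <selector> --sort-by=.metadata.creationTimestamp
The last row in the output is the most recently created ReplicaSet, i.e. the new one during an in-progress rollout.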