Usage of --record in kubectl imperative commands in Kubernetes

I tried to find useful information on when I should use --record. I created 3 commands:
k set image deployment web1 nginx=lfccncf/nginx:latest --record
k rollout undo deployment/web1 --record
k -n kdpd00202 edit deployment web1 --record
Could anyone tell me if I need to use --record in each of these 3 commands?
When is it necessary to use --record and when is it useless?

Kubernetes desired state can be updated/mutated through two paradigms:
Either imperatively, using kubectl ad-hoc commands (k set, k create, k run, k rollout, ...)
Or declaratively, using YAML manifests with a single k apply
The declarative way is ideal for treating your k8s manifests as code: you can share this code with the team, version it through Git for example, and keep track of its history leveraging GitOps practices (branching models, code review, CI/CD).
However, the imperative way cannot be reviewed by the team, as these ad-hoc commands are run by an individual, and no one else can easily find out the cause of a change after it has been made.
To overcome the absence of an audit trail with imperative commands, the --record option is there to bind the root cause of the change, as an annotation called kubernetes.io/change-cause whose value is the imperative command itself.
(note below is from the official doc)
Note: You can specify the --record flag to write the command executed in the resource annotation kubernetes.io/change-cause. The recorded change is useful for future introspection. For example, to see the commands executed in each Deployment revision.
In conclusion:
Theoretically, --record is not mandatory.
Practically, it's mandatory in order to ensure the changes leave a rudimentary audit trail behind and comply with SRE processes and DevOps culture.
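For example (a quick check, assuming the web1 deployment from the question), after a recorded command you can read the annotation back:
$ kubectl set image deployment web1 nginx=lfccncf/nginx:latest --record
$ kubectl describe deployment web1 | grep change-cause
The output is similar to this:
kubernetes.io/change-cause: kubectl set image deployment web1 nginx=lfccncf/nginx:latest --record=true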

You can specify the --record flag to write the command executed in the resource annotation kubernetes.io/change-cause. The recorded change is useful for future introspection. For example, to see the commands executed in each Deployment revision.
kubectl rollout history deployment.v1.apps/nginx-deployment
The output is similar to this:
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true
2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true
3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true
So it's not mandatory for any of the commands, but it is recommended for kubectl set image, because if you skip --record you will not see anything in the CHANGE-CAUSE column as shown above.

The --record flag also helps you see the details of the revision history, so rolling back to a previous version is smoother.
When you don't append the --record flag, the CHANGE-CAUSE column will just show <none> in
kubectl rollout history
$ kubectl rollout history deployment/app
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 <none>
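As a side note (a manual alternative, not something the commands above require): if a change was already rolled out without --record, you can still populate the same annotation yourself, e.g.
kubectl annotate deployment/app kubernetes.io/change-cause="updated image to app:1.2.3" --overwrite
where the message is whatever description you want to show up in the CHANGE-CAUSE column.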

Related

kubectl rollout does not provide an image flag

I am trying to implement a rollback strategy for deployments using a CI pipeline. My goal is to roll back to a specific image version rather than using the generic, black-box revision number of the kubernetes rollout subcommand. Unfortunately, the default rollback flags only provide --to-revision to select a revision number, which doesn't give you any information about what image that revision is using.
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
Is there a way to know what image and tag that revision is using? The REVISION numbers alone don't tell much, which means I would have to keep track, somehow, of which image version is behind each revision number. I understand that overriding CHANGE-CAUSE to add more information to the deployment's annotations is not advisable.
REVISION CHANGE-CAUSE
1 kubectl create -f deployment.yaml --record
2 kubectl set image deployment/httpd-deployment httpd=httpd:1.9.1
3 kubectl set image deployment/htppd-deployment httpd=nginx:1.91
I am looking for something like this in my CI pipeline, with a --to-image sort of flag:
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2 --to-image=httpd:1.7
Knowing that a --to-image flag doesn't exist, does that mean totally abandoning the rollout command and just redeploying with a previous image like below? (I'm thinking this is not actually harnessing the rollout capability.)
$ kubectl set image deployment/httpd-deployment httpd=httpd:1.9.1
Is there a better way to implement a Kubernetes rollback to a previous, specific deployment image?
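There is no --to-image flag, but one way to map a revision number to an image (a sketch a CI script could build on) is the --revision flag of rollout history, which prints the pod template of that revision, including its image:
$ kubectl rollout history deployment/nginx-deployment --revision=2
The output is similar to this:
deployment.apps/nginx-deployment with revision #2
Pod Template:
  Labels:       app=nginx
  Containers:
   nginx:
    Image:      nginx:1.16.1
    ...
Once you know which revision carries the image you want, you can undo to that revision number:
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2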

Configure resource limits in a pod via cli

I am trying to run a pod where the default limit of my GKE Autopilot cluster is 500m vCPU, but I want to run all my pods with only 250m vCPU. I tried the command kubectl run pod-0 --requests="cpu=150m" --restart=Never --image=example/only
But I get a warning: Flag --requests has been deprecated, has no effect and will be removed in the future. Then when I describe my pod, it shows 500m. I would like to know if there is an option to set resource limits with plain kubectl run.
Since kubectl v1.21, all generators are deprecated.
GitHub issue: Remove generators from kubectl commands, quote:
kubectl commands are going to get rid of the dependency on the generators, and ultimately remove them entirely. The main goal is to simplify the use of kubectl commands. This is already done in kubectl run, see #87077 and #68132.
So it looks like the --limits and --requests flags will no longer be available in the future.
Here is the PR that did it: Drop deprecated run flags and deprecate unused ones.
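As a workaround (just a sketch, reusing the pod name and image from the question), you can skip kubectl run's resource flags entirely and apply a minimal manifest that sets the requests and limits you want:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-0
spec:
  restartPolicy: Never
  containers:
  - name: pod-0
    image: example/only
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 250m
EOF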

How to update all resources that were created with kubernetes manifest?

I use a kubernetes manifest file to deploy my code. My manifest typically has a number of things like Deployment, Service, Ingress, etc. How can I perform a type of "rollout" or "restart" of everything that was applied with my manifest?
I know I can update my deployment say by running
kubectl rollout restart deployment <deployment name>
but what if I need to update all resources like ingress/service? Can it all be done together?
I would recommend storing your manifests, e.g. Deployment, Service and Ingress, in a directory, e.g. <your-directory>
Then use kubectl apply to "apply" those files to Kubernetes, e.g.:
kubectl apply -f <directory>/
See more on Declarative Management of Kubernetes Objects Using Configuration Files.
When your Deployment is updated this way, your pods will be replaced with the new version during a rolling deployment (you can configure to use another deployment strategy).
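For reference, the deployment strategy mentioned above is configured in the Deployment manifest itself, for example (the values here are only illustrative):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0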
This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.
As Burak Serdar has already suggested in his comment, you can simply use:
kubectl apply -f your-manifest.yaml
and it will apply all the changes you made in your manifest file to the resources, which are already deployed.
However note that running:
kubectl rollout restart -f your-manifest.yaml
does not make much sense, as this file contains definitions of resources such as Services to which kubectl rollout restart cannot be applied. As a consequence you'll see the following error:
$ kubectl rollout restart -f deployment-and-service.yaml
deployment.apps/my-nginx restarted
error: services "my-nginx" restarting is not supported
So while you can run kubectl rollout restart against a file that contains definitions of both resources that support this operation and resources that don't, only the former will actually be restarted and the rest will produce errors.
Running kubectl apply instead will result in an update of all the resources whose definition has changed in your manifest:
$ kubectl apply -f deployment-and-service.yaml
deployment.apps/my-nginx configured
service/my-nginx configured

Can I see a rollout in more detail?

I was doing a practice exam on the website killer.sh and ran into a question I feel I solved in a hacky way. Given a deployment that has had a bad rollout, revert to the last revision that didn't have any issues. If I check a deployment's rollout history, for example with the command:
kubectl rollout history deployment mydep
I get a small table with revision numbers and "change-cause" commands. Is there any way to check the changes made to the deployment's YAML for a specific revision? Because I was stumped trying to figure out which specific revision didn't have the error in it.
Behind the scenes a Deployment creates a ReplicaSet for each revision; the REVISION number you see in kubectl rollout history deployment mydep is stored on that ReplicaSet in the deployment.kubernetes.io/revision annotation, so you can look at and diff the old ReplicaSets associated with the Deployment.
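For example (a sketch; the label selector and ReplicaSet names below are hypothetical, take them from your own cluster), you can list the ReplicaSets and diff two revisions:
$ kubectl get rs -l app=mydep                      # each ReplicaSet carries a deployment.kubernetes.io/revision annotation
$ kubectl get rs mydep-abc123 -o yaml > rev2.yaml  # hypothetical names, pick them from the list above
$ kubectl get rs mydep-def456 -o yaml > rev3.yaml
$ diff rev2.yaml rev3.yaml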
On the other hand, being an eventually-consistent system, kubernetes has no notion of "good" or "bad" state, so it can't know what was the last successful deployment, for example; that's why deployment tools like helm, kapp etc. exist.
Kubernetes does not store more than what is necessary for it to operate, and most of the time that is just the desired state, because Kubernetes is not a version control system.
This is precisely why you need a version control system, coupled with tools like helm or kustomize, where you store the deployment YAMLs and apply them to the cluster with each new version of the software. This helps in going back in history to dig out details when things break.
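With Helm, for instance, that history lives in the release itself (the release name here is hypothetical):
$ helm history myrelease
$ helm rollback myrelease 2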
You can record the last executed command that changed the deployment with the --record option. When using --record, the executed command (the change-cause) is written to the deployment's metadata.annotations. You will not see this in your local YAML file, but when you export the deployment as YAML you will notice the change.
Use the --record option like below:
kubectl create deployment <deployment name> --image=<someimage> --dry-run=client -o yaml > testdeployment.yaml
kubectl create -f testdeployment.yaml --record
or
kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record

How to "deploy" in kubernetes without any changes, just to get pods to cycle

What I am trying to do:
The app that runs in the Pod does some refreshing of its data files on start.
I need to restart the container each time I want to refresh the data.
(A refresh can take a few minutes, so I have a Probe checking for readiness.)
What I think is a solution:
I will run a scheduled job to do a rolling-update kind of deploy, which will take the old Pods out, one at a time and replace them, without downtime.
Where I'm stuck:
How do I trigger a deploy if I haven't changed anything?
Also, I need to be able to do this from the scheduled job, obviously, so no manual editing.
Any other ways of doing this?
As of kubectl 1.15, you can run:
kubectl rollout restart deployment <deploymentname>
What this does internally is patch the deployment's pod template with a kubectl.kubernetes.io/restartedAt annotation; since the pod template changes, the deployment controller performs a rollout according to the deployment's update strategy.
For previous versions of Kubernetes, you can simulate a similar thing:
kubectl set env deployment --env="LAST_MANUAL_RESTART=$(date +%s)" "deploymentname"
And even replace all in a single namespace:
kubectl set env --all deployment --env="LAST_MANUAL_RESTART=$(date +%s)" --namespace=...
According to documentation:
Note: a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. .spec.template) is changed, e.g. updating labels or container images of the template.
You can just use kubectl patch to update, for example, a label inside .spec.template.
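A minimal sketch of such a patch (the deployment name and the restarted-at annotation key are arbitrary; any change to .spec.template triggers a new rollout):
kubectl patch deployment mydep -p \
  '{"spec":{"template":{"metadata":{"annotations":{"restarted-at":"'"$(date +%s)"'"}}}}}'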