Can I see a rollout in more detail? - kubernetes

I was doing a practice exam on the website killer.sh and ran into a question I feel I solved in a hacky way. Given a deployment that has had a bad rollout, revert to the last revision that didn't have any issues. If I check a deployment's rollout history, for example with the command:
kubectl rollout history deployment mydep
I get a small table with revision numbers and "change-cause" commands. Is there any way to check the changes made to the deployment's YAML for a specific revision? I was stumped trying to figure out which revision didn't have the error in it.

Behind the scenes a Deployment creates a new ReplicaSet for every revision; the REVISION number you see in kubectl rollout history deployment mydep is recorded on each ReplicaSet in its deployment.kubernetes.io/revision annotation, so you can look at and diff the old ReplicaSets associated with the Deployment.
On the other hand, being an eventually-consistent system, Kubernetes has no notion of a "good" or "bad" state, so it can't know which was the last successful deployment, for example; that's why deployment tools like helm, kapp etc. exist.
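To answer the original question more directly, kubectl can show you the pod template that was recorded for a specific revision, and roll back to it once you have found the good one. A small sketch, assuming the deployment is called mydep and revision 2 turns out to be the healthy one (the names and numbers are illustrative):
kubectl rollout history deployment mydep --revision=2   # print the pod template recorded for revision 2
kubectl rollout undo deployment mydep --to-revision=2   # roll back to that exact revision
You can also kubectl get rs and kubectl describe the old ReplicaSets directly; each of them carries the deployment.kubernetes.io/revision annotation, so you can diff their pod templates against each other.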

Kubernetes does not store more than what is necessary for it to operate, and most of the time that is just the desired state, because Kubernetes is not a version control system.
This is precisely why you need a version control system coupled with tools like helm or kustomize, where you store the deployment YAMLs and apply them to the cluster with each new version of the software. This makes it possible to go back in history and dig out details when things break.
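With helm, for example, each release keeps its own revision history, so going back does not require digging through ReplicaSets at all. A minimal sketch, assuming a release named myrelease (the name is illustrative):
helm history myrelease       # list the revisions helm has recorded for this release
helm rollback myrelease 2    # roll the release back to revision 2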

You can record the last command that changed the deployment with the --record option. When you use --record, the executed command (the change-cause) is added to the deployment's metadata.annotations. You will not see this in your local YAML file, but when you export the deployment as YAML you will notice the change.
You can use the --record option like below:
kubectl create deployment <deployment name> --image=<someimage> --dry-run=client -o yaml > testdeployment.yaml
kubectl create -f testdeployment.yaml --record
or
kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record
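After a change made with --record, the executed command shows up as the kubernetes.io/change-cause annotation when you export the deployment. A sketch of what to look for (the deployment name is a placeholder):
kubectl get deployment <deploymentname> -o yaml | grep change-cause
# expected to show something like:
#   kubernetes.io/change-cause: kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record=true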

Related

How to update all resources that were created with kubernetes manifest?

I use a Kubernetes manifest file to deploy my code. My manifest typically has a number of things like a Deployment, a Service, an Ingress, etc. How can I perform a type of "rollout" or "restart" of everything that was applied with my manifest?
I know I can update my deployment say by running
kubectl rollout restart deployment <deployment name>
but what if I need to update all resources like ingress/service? Can it all be done together?
I would recommend storing your manifests, e.g. Deployment, Service and Ingress, in a directory, e.g. <your-directory>.
Then use kubectl apply to "apply" those files to Kubernetes, e.g.:
kubectl apply -f <directory>/
See more on Declarative Management of Kubernetes Objects Using Configuration Files.
When your Deployment is updated this way, your pods will be replaced with the new version during a rolling deployment (you can configure to use another deployment strategy).
As Burak Serdar has already suggested in his comment, you can simply use:
kubectl apply -f your-manifest.yaml
and it will apply all the changes you made in your manifest file to the resources that are already deployed.
However note that running:
kubectl rollout restart -f your-manifest.yaml
does not make much sense, as this file contains definitions of resources such as Services, to which kubectl rollout restart cannot be applied. As a consequence you'll see the following error:
$ kubectl rollout restart -f deployment-and-service.yaml
deployment.apps/my-nginx restarted
error: services "my-nginx" restarting is not supported
So as you can see, running kubectl rollout restart against a file that contains definitions of both resources that support this operation and resources that don't will only partially succeed: the Deployment is restarted, but the Service produces an error.
Running kubectl apply instead will result in an update of all the resources whose definition has changed in your manifest:
$ kubectl apply -f deployment-and-service.yaml
deployment.apps/my-nginx configured
service/my-nginx configured
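If the goal is simply to cycle every workload created from the manifest, whether or not its definition changed, one option is to restart only the kinds that support the operation; Services and Ingresses have no pods, so there is nothing to restart for them. A sketch, assuming everything lives in a namespace called my-namespace (the name is illustrative):
kubectl rollout restart deployment -n my-namespace    # restarts every Deployment in that namespace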

Usage of --record in kubectl imperative commands in Kubernetes

I tried to find useful information about when I should use --record. I ran 3 commands:
k set image deployment web1 nginx=lfccncf/nginx:latest --record
k rollout undo deployment/web1 --record
k -n kdpd00202 edit deployment web1 --record
Could anyone tell me if I need to use --record in each of these 3 commands?
When is it necessary to use --record and when is it useless?
Kubernetes desired state can be updated/mutated through two paradigms:
Either imperatively, using kubectl ad-hoc commands (k set, k create, k run, k rollout, ...)
Or declaratively, using YAML manifests with a single k apply
The declarative way is ideal for treating your k8s manifests as code: you can share this code with the team, version it through Git for example, and keep track of its history leveraging GitOps practices (branching models, code review, CI/CD).
However, the imperative way cannot be reviewed by the team, as these ad-hoc commands are run by an individual and no one else can easily find out the cause of a change after it has been made.
To overcome the absence of an audit trail with imperative commands, the --record option is there to record the cause of the change as an annotation called kubernetes.io/change-cause, whose value is the imperative command itself.
(the note below is from the official docs)
Note: You can specify the --record flag to write the command executed in the resource annotation kubernetes.io/change-cause. The recorded change is useful for future introspection. For example, to see the commands executed in each Deployment revision.
In conclusion:
Theoretically, --record is not mandatory.
Practically, it is mandatory in order to ensure that changes leave a rudimentary audit trail behind and comply with SRE processes and DevOps culture.
You can specify the --record flag to write the command executed in the resource annotation kubernetes.io/change-cause. The recorded change is useful for future introspection. For example, to see the commands executed in each Deployment revision.
kubectl rollout history deployment.v1.apps/nginx-deployment
The output is similar to this:
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true
2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true
3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true
So it's not mandatory for any of the commands, but it is recommended for kubectl set image, because if you skip --record you will not see anything in the CHANGE-CAUSE column as shown above.
The --record flag also helps you see the details of the revision history, so rolling back to a previous version is smoother.
When you don't append the --record flag, the CHANGE-CAUSE column will just show <none> in
kubectl rollout history
$ kubectl rollout history deployment/app
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
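Worth noting: in recent kubectl versions the --record flag has been deprecated. An alternative that leaves the same audit trail (a sketch, assuming the deployment is named app and the image change is illustrative) is to set the annotation yourself right after the change:
kubectl set image deployment/app nginx=nginx:1.16.1
kubectl annotate deployment/app kubernetes.io/change-cause="update nginx to 1.16.1" --overwrite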

How to "deploy" in kubernetes without any changes, just to get pods to cycle

What I am trying to do:
The app that runs in the Pod does some refreshing of its data files on start.
I need to restart the container each time I want to refresh the data.
(A refresh can take a few minutes, so I have a Probe checking for readiness.)
What I think is a solution:
I will run a scheduled job to do a rolling-update kind of deploy, which will take the old Pods out, one at a time and replace them, without downtime.
Where I'm stuck:
How do I trigger a deployment if I haven't changed anything?
Also, I need to be able to do this from the scheduled job, obviously, so no manual editing.
Are there any other ways of doing this?
As of kubectl 1.15, you can run:
kubectl rollout restart deployment <deploymentname>
What this does internally is patch the deployment's pod template with a kubectl.kubernetes.io/restartedAt annotation, so a rollout is performed according to the deployment's update strategy.
For previous versions of Kubernetes, you can simulate a similar thing:
kubectl set env deployment --env="LAST_MANUAL_RESTART=$(date +%s)" "deploymentname"
And you can even do it for all Deployments in a single namespace:
kubectl set env --all deployment --env="LAST_MANUAL_RESTART=$(date +%s)" --namespace=...
According to documentation:
Note: a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. .spec.template) is changed, e.g. updating labels or container images of the template.
You can just use kubectl patch to update e.g. a label or annotation inside .spec.template.
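A minimal sketch of that patch approach, assuming a deployment named mydep; the annotation key restart-trigger is just an illustrative name, since any change under .spec.template triggers a new rollout:
kubectl patch deployment mydep -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restart-trigger\":\"$(date +%s)\"}}}}}"
Because the patch only touches the pod template's annotations, it is easy to run from a scheduled job without editing any manifests.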

Kubernetes deployments: Editing the 'spec' of a pod's YAML file fails

The env element added in spec.containers of a pod using the K8s dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This fact is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others probably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
     List of environment variables to set in the container. Cannot be updated.
     EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit, since the Deployment will roll out updated versions of the Pod(s) with the variable changes and kill the older Pods that don't have them. Obviously, that method is not changing the Pod in place, but it is one way to get what you need.
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
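For the Deployment route mentioned above, a quick sketch (the deployment name my-app and the variable are illustrative); changing an env var on the Deployment edits its pod template and therefore triggers a rolling update:
kubectl set env deployment/my-app MY_SETTING=new-value   # add or change the variable on every container in the pod template
kubectl rollout status deployment/my-app                 # watch the rolling update complete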
We cannot edit the env variables, resource limits, or service account of a pod that is running live.
But we can definitely edit/update the image name, tolerations, active deadline seconds, etc.
However, the Deployment can be easily edited, because the pod is a child template of the Deployment specification.
In order to "edit" the running pod with the desired changes, the following approach can be used.
Extract the pod definition to a file, make the necessary changes, delete the existing pod, and create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, the same error came up for me. But when I retried after some time it worked.
It feels like some update was going on at the same time earlier, so if I edit the YAML quickly and apply the changes, it works.

Redeploying a Google Container Controller when the repository Image Changes

Is there any way for me to replicate the behavior I get on cloud.docker where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy-handed, but it works fine. I'm really missing the auto-redeploy feature I have on cloud.docker.
Deleting the controller yaml file itself won't delete the actual controller in Kubernetes unless you have a special configuration to do so. If you have more than one instance running, deleting the controller probably isn't what you want, because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment controller update the yaml file image and execute kubectl apply.
For a ReplicationController, update the yaml file and execute kubectl rolling-update. See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image
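A minimal sketch of the kubectl set image route, assuming a Deployment named ticketing whose container is also named ticketing (both names are illustrative):
kubectl set image deployment/ticketing ticketing=myregistry/ticketing:v2   # swap in the new image and start a rolling update
kubectl rollout status deployment/ticketing                                # wait for the rollout to complete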
Alternatively you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few PaaS options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1 with the kubectl commands.
$ kubectl edit deployment/my-nginx
That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scenes. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods.