How to update all resources that were created with a Kubernetes manifest?

I use a Kubernetes manifest file to deploy my code. My manifest typically has a number of things like Deployment, Service, Ingress, etc. How can I perform a type of "rollout" or "restart" of everything that was applied with my manifest?
I know I can update my deployment say by running
kubectl rollout restart deployment <deployment name>
but what if I need to update all resources like ingress/service? Can it all be done together?

I would recommend storing your manifests, e.g. Deployment, Service and Ingress, in a single directory, e.g. <your-directory>.
Then use kubectl apply to "apply" those files to Kubernetes, e.g.:
kubectl apply -f <your-directory>/
See more on Declarative Management of Kubernetes Objects Using Configuration Files.
When your Deployment is updated this way, your pods will be replaced with the new version during a rolling update (you can configure the Deployment to use another deployment strategy).
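For illustration, a minimal sketch of that workflow; the resource name (my-app) and the exact API groups shown in the output are assumptions and will vary with your cluster:
$ ls <your-directory>/
deployment.yaml  ingress.yaml  service.yaml
$ kubectl apply -f <your-directory>/
deployment.apps/my-app configured
ingress.networking.k8s.io/my-app unchanged
service/my-app unchanged
Note that kubectl apply only reports configured for resources whose definitions actually changed; everything else is reported as unchanged and left alone.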

As Burak Serdar has already suggested in his comment, you can simply use:
kubectl apply -f your-manifest.yaml
and it will apply all the changes you made in your manifest file to the resources, which are already deployed.
However, note that running:
kubectl rollout restart -f your-manifest.yaml
does not make much sense, as this file contains definitions of resources such as Services to which kubectl rollout restart cannot be applied. As a consequence you'll see the following error:
$ kubectl rollout restart -f deployment-and-service.yaml
deployment.apps/my-nginx restarted
error: services "my-nginx" restarting is not supported
So as you can see, it is possible to run kubectl rollout restart against a file that contains definitions of both resources that support this operation and those that don't: the Deployment gets restarted, while the Service produces an error.
Running kubectl apply instead will update all the resources whose definitions have changed in your manifest:
$ kubectl apply -f deployment-and-service.yaml
deployment.apps/my-nginx configured
service/my-nginx configured
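If the goal is to restart every workload from the manifest without editing anything, one option worth knowing about (a sketch, assuming all the workloads you care about are Deployments in the current namespace) is to restart all Deployments at once; Services and Ingresses have no pods, so nothing is lost by skipping them:
kubectl rollout restart deployment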

Related

How to write Kubernetes annotations to the underlying YAML files?

I am looking to apply existing annotations on a Kubernetes resource to the underlying YAML configuration files. For example, this command will successfully find all pods with a label of "app=helloworld" or "app=testapp" and annotate them with "xyz=test_anno":
kubectl annotate pods -l 'app in (helloworld, testapp)' xyz=test_anno
However, this only applies the annotations to the running pods and doesn't change the YAML files. How do I force those changes to the YAML files so they're permanent, either after the fact or as part of kubectl annotate to start with?
You could use the kubectl patch command with a little trick:
kubectl patch $(kubectl get pods -l 'app in (helloworld, testapp)' -o name) -p '{"metadata":{"annotations":{"xyz":"test_anno"}}}'
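This still only changes the live objects. To make the change permanent, one approach (a sketch, not from the original answer; the file name is arbitrary) is to export the patched objects back to yaml and commit the result to source control, or merge the annotations into your existing manifests:
kubectl get pods -l 'app in (helloworld, testapp)' -o yaml > annotated-pods.yaml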

Can I see a rollout in more detail?

I was doing a practice exam on the website killer.sh, and ran into a question I feel I solved in a hacky way. Given a deployment that has had a bad rollout, revert to the last revision that didn't have any issues. If I check a deployment's rollout history, for example with the command:
kubectl rollout history deployment mydep
I get a small table with revision numbers and "change-cause" commands. Is there any way to check the changes made to the deployment's yaml file for a specific revision? I was stumped trying to figure out which specific revision didn't contain the error.
Behind the scenes, a Deployment creates a ReplicaSet per revision and stamps it with the REVISION number you see in kubectl rollout history deployment mydep (in the deployment.kubernetes.io/revision annotation), so you can look at and diff old ReplicaSets associated with the Deployment.
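More directly, kubectl can print the pod template recorded for a single revision and roll back to a chosen one; for example (mydep and revision 2 are placeholders):
kubectl rollout history deployment mydep --revision=2
kubectl rollout undo deployment mydep --to-revision=2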
On the other hand, being an eventually-consistent system, kubernetes has no notion of "good" or "bad" state, so it can't know what was the last successful deployment, for example; that's why deployment tools like helm, kapp etc. exist.
Kubernetes does not store more than what is necessary for it to operate, and most of the time that is just the desired state, because Kubernetes is not a version control system.
This is precisely why you need a version control system coupled with tools like helm or kustomize, where you store the deployment yamls and apply them to the cluster with each new version of the software. This helps in going back in history to dig out details when things break.
You can record the command that changed the deployment with the --record option. When using --record, the executed command (the change-cause) is stored in the deployment's metadata.annotations. You will not see this in your local yaml file, but when you export the deployment as yaml you will notice the change.
Using the --record option looks like this (note the client-side dry run, so the first command only generates the yaml instead of creating the deployment):
kubectl create deployment <deployment-name> --image=<someimage> --dry-run=client -o yaml > testdeployment.yaml
kubectl create -f testdeployment.yaml --record
or
kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record
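With that in place, the history shows the recorded commands; a sketch of what the output might look like (kubectl appends --record=true to the stored change-cause):
$ kubectl rollout history deployment/<deploymentname>
REVISION  CHANGE-CAUSE
1         kubectl create --filename=testdeployment.yaml --record=true
2         kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record=true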

Location of a Kubernetes object's definition file

How do I find the location of a Kubernetes object's definition file?
I know the name of a Kubernetes deployment and want to make some changes directly to its definition file instead of using 'kubectl edit deployment <deployment-name>'.
The object definitions are stored internally in Kubernetes in replicated storage that's not directly accessible. If you do change an object definition, you would still need to trigger the rest of the Kubernetes update sequence when an object changes.
Typical practice is to keep the Kubernetes YAML files in source control. You can then edit these locally, and use kubectl apply -f to send them to the cluster. If you don't have them then you can run commands like kubectl get deployment depl-name -o yaml to get them out, and then check in the results to your source control repository.
If you really want to edit YAML definitions in an imperative, non-reproducible way, kubectl edit is the most direct thing you can do.
You could execute kubectl get deployment <deployment-name> -o yaml to get the deployment definition in a yaml format (or -o json to get in a json format), save that to a file, edit the file and apply the changes.
As a step-by-step guide:
Run kubectl get deployment deployment-name -o yaml > deployment-name.yaml
Edit and save the deployment-name.yaml using the editor of your preference
Run kubectl apply -f deployment-name.yaml to apply the changes
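Optionally, before applying you can preview what would change against the live object; kubectl diff does this in reasonably recent kubectl versions (an extra suggestion, not part of the original answer):
kubectl diff -f deployment-name.yaml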
It's all stored in etcd:
Nodes
Namespaces
ServiceAccounts
Roles and RoleBindings, ClusterRoles / ClusterRoleBindings
ConfigMaps
Secrets
Workloads: Deployments, DaemonSets, Pods, …
Cluster’s certificates
The resources within each apiVersion
The events that bring the cluster in the current state
Take a look at this blog post
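As an illustration of where that data lives, you can list the keys Kubernetes writes under its /registry prefix in etcd. This is a sketch assuming direct access to an etcd member; the certificate paths are kubeadm defaults, and managed clusters generally won't give you this access at all:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only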

What is the recommended alternative to kubectl '--generator' option?

One of the points in the kubectl best practices section in Kubernetes Docs state below:
Pin to a specific generator version, such as kubectl run --generator=deployment/v1beta1
But then a little further down in the doc, we learn that, except for Pod, the use of the --generator option is deprecated and will be removed in future versions.
Why is this being done? Doesn't generator make life easier in creating a template file for resource definition of deployment, service, and other resources? What alternative is the kubernetes team suggesting? This isn't there in the docs :(
kubectl create is the recommended alternative if you want to create more than just a pod (like a deployment).
https://kubernetes.io/docs/reference/kubectl/conventions/#generators says:
Note: kubectl run --generator except for run-pod/v1 is deprecated in v1.12.
This pull request has the reason why generators (except run-pod/v1) were deprecated:
The direction is that we want to move away from kubectl run because it's over bloated and complicated for both users and developers. We want to mimic docker run with kubectl run so that it only creates a pod, and if you're interested in other resources kubectl create is the intended replacement.
For a deployment you can try:
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
keeping in mind the note quoted above: kubectl run --generator except for run-pod/v1 is deprecated in v1.12.
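If what you liked about generators was getting a template file for a resource definition, kubectl create covers that too via a client-side dry run (on older kubectl the flag is spelled plain --dry-run; the hello-node.yaml file name is arbitrary):
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node --dry-run=client -o yaml > hello-node.yaml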

Redeploying a Google Container Controller when the repository Image Changes

Is there any way for me to replicate the behavior I get on cloud.docker where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy-handed, but it works fine. I'm really missing the auto-redeploy feature I have on cloud.docker.
Deleting the controller yaml file itself won't delete the actual controller in kubernetes unless you have a special configuration to do so. If you have more than 1 instance running, deleting the controller probably isn't what you would want because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment, update the image in the yaml file and execute kubectl apply.
For a ReplicationController, update the yaml file and execute kubectl rolling-update. See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image
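A sketch of the kubectl set image route, using names loosely based on the question's ticketing example (the deployment name, container name, and image are all assumptions):
kubectl set image deployment/ticketing ticketing=myregistry/ticketing:v2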
Alternatively you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few PaaS options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1, with the kubectl commands:
$ kubectl edit deployment/my-nginx
That’s it! The Deployment will declaratively update the deployed nginx application progressively behind the scenes. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods.