The official Kubernetes guidelines instruct that you can update a deployment either by setting the image on the command line:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
or by editing it inline (which, I assume, launches the default editor):
kubectl edit deployment/nginx-deployment
However, both processes make consistency harder, given that one then needs to go and update the my-deployment.yml file offline, i.e. the file the up-and-running deployment came from (and this deprives one of the advantage of keeping their manifests version-controlled).
Is there a way to
launch a deployment via the file
perform (when needed) updates to the same file
update the deployment by pointing to the same, updated file?
You can do it simply by following these steps:
Edit the deployment.yaml file
Run the below command:
kubectl apply -f deployment.yaml
This is what I usually follow. You can also use kubectl patch or kubectl edit.
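The edit-then-apply loop above can be sketched as a small script; the file name and image tags here are assumptions, and the single `image:` line stands in for a full manifest:

```shell
# Minimal sketch of the edit-then-apply loop; file name and tags are assumptions.
# Stand-in for the full manifest, containing the image line we care about:
printf 'image: nginx:1.7.9\n' > deployment.yaml
# 1. Update the manifest in place (any editor works; sed keeps it scriptable):
sed -i 's|nginx:1.7.9|nginx:1.9.1|' deployment.yaml
# 2. Re-apply the same file so the live deployment converges on it:
# kubectl apply -f deployment.yaml
```

Because the file stays the single source of truth, it can be committed to version control after each change.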
Related
How to upgrade an existing running deployment with yaml deployment file without changing the number of running replicas of that deployment?
So, I need to set the number of replicas on the fly without changing the yaml file.
It is like running kubectl apply -f deployment.yaml along with kubectl scale --replicas=3 together, or in other words applying the deployment yaml changes while keeping the number of running replicas the same as it is.
For example: I have a running deployment that has already scaled its pods to 5 replicas, and I need to change deployment parameters from CD (upgrade the container image, change environment variables, etc.) without manually checking the number of running pods and updating the yaml with it. How can I achieve this?
Use the kubectl edit command
kubectl edit (RESOURCE/NAME | -f FILENAME)
E.g. kubectl edit deployment.apps/webapp-deployment
It will open an editor. You can update the value for number of replicas in the editor and save.
Refer to the documentation section - Editing resources
https://kubernetes.io/docs/reference/kubectl/cheatsheet/#editing-resources
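For the CD scenario in the question, one way to script this without touching the yaml is to capture the live replica count, apply the manifest, and then restore the count; the deployment and file names below are assumptions:

```shell
# Capture the live replica count before applying manifest changes
# ("webapp-deployment" and the file name are assumptions):
replicas=$(kubectl get deployment webapp-deployment -o jsonpath='{.spec.replicas}')
kubectl apply -f deployment.yaml
# Restore the count in case the manifest's replicas field overwrote it:
kubectl scale deployment webapp-deployment --replicas="$replicas"
```

This requires a running cluster, so it is a sketch of the workflow rather than something testable offline.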
I was doing a practice exam on the website killer.sh and ran into a question I feel I solved in a hacky way. Given a deployment that has had a bad rollout, revert to the last revision that didn't have any issues. If I check a deployment's rollout history, for example with the command:
kubectl rollout history deployment mydep
I get a small table with version numbers, and "change-cause" commands. Is there any way to check the changes made to the deployment's yaml file for a specific revision? Because I was stumped in figuring out which specific revision didn't have the error inside of it.
Behind the scenes a Deployment creates a ReplicaSet per revision, with the revision number recorded in the ReplicaSet's deployment.kubernetes.io/revision annotation (the REVISION column you see in kubectl rollout history deployment mydep), so you can look at and diff the old ReplicaSets associated with the Deployment.
On the other hand, being an eventually-consistent system, kubernetes has no notion of "good" or "bad" state, so it can't know what was the last successful deployment, for example; that's why deployment tools like helm, kapp etc. exist.
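For the exam scenario above, kubectl itself can print the pod template stored for each revision, which makes it possible to spot the bad change and roll back to a known-good revision (the revision number here is an assumption):

```shell
# Show the pod template recorded for a specific revision:
kubectl rollout history deployment mydep --revision=2
# Once the last good revision is identified, roll back to it:
kubectl rollout undo deployment mydep --to-revision=2
```

Both commands need a live cluster, so this is a workflow sketch rather than a runnable snippet.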
Kubernetes does not store more than what is necessary for it to operate and most of the time that is just the desired state because kubernetes is not a version control system.
This is precisely why you need a version control system coupled with tools like helm or kustomize, where you store the deployment yamls and apply them to the cluster with each new version of the software. This helps in going back through the history to dig out details when things break.
You can record the last command that changed the deployment with the --record option. When using --record, the executed command (the change-cause) is stored in the deployment's metadata.annotations. You will not see this in your local yaml file, but when you export the deployment as yaml you will notice the change.
Use the --record option like below:
kubectl create deployment <deployment name> --image=<someimage> --dry-run=client -o yaml > testdeployment.yaml
kubectl create -f testdeployment.yaml --record
or
kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record
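The recorded command lands in the kubernetes.io/change-cause annotation, so it can be read back from the live object; the deployment name below is an assumption:

```shell
# Print the recorded change-cause annotation from the live object
# (note the escaped dots in the jsonpath key):
kubectl get deployment mydep \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/change-cause}'
# It also appears in the CHANGE-CAUSE column of:
kubectl rollout history deployment mydep
```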
Kubernetes newbie question: Can I somehow include my pod definition inside my deployment definition?
I currently have a pod.yml, a service.yml and a deployment.yml file. Both in pod.yml and deployment.yml I specify my docker image and tag: user/image:v1.
To do a rolling update, I tried doing kubectl set image deployment/api api=user/image:v2
However, that doesn't work on its own; it seems to conflict with the image tag in the pod definition. I also need to update the pod with tag v2 for kubectl set image to take effect. I feel like I'm doing something wrong. Thoughts?
Yes, you can include all definitions in one file. Have a look at the guestbook-all-in-one.yaml example.
The recommended way to do a rolling update is to change the file and then use apply:
$ vim guestbook-all-in-one.yaml # make the desired changes
$ kubectl apply -f guestbook-all-in-one.yaml # apply these changes
If possible, you should also have this file under version control, so that the file with the current status is always easily accessible.
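As a sketch, an all-in-one file simply concatenates the resources with `---` separators, so the image tag lives in exactly one place (the pod template inside the Deployment). The names, labels, and ports below are assumptions based on the question:

```yaml
# Hypothetical all-in-one manifest: Deployment and Service in a single file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: user/image:v2   # the only place the image tag needs to change
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080
```

With this layout there is no separate pod.yml to fall out of sync; the Deployment owns the pods it creates.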
So, in order to update the images running in a pod, I have to modify the deployment config (yaml file) and run something like kubectl apply -f deploy.yaml.
This means, if I'm not editing the yaml file manually I'll have to use some template / search and replace functionality. Which isn't really ideal.
Are there any better approaches?
It seems there is a kubectl rolling-update command, but I'm not sure if this works for 'deployments'.
For example running the following: kubectl rolling-update wordpress --image=eu.gcr.io/abcxyz/wordpress:deploy-1502443760
Produces an error of:
error: couldn't find a replication controller with source id == default/wordpress
I am using this for changing images in Deployments:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
If you view the yaml files as the source of truth, use a tag like stable in the yaml and only issue kubectl set image commands when the tag is moved. (Use the sha256 image ID to actually trigger a rollout; image names are compared as strings, so updating from :stable to :stable is a no-op even if the tag now points to a different image.)
See updating a deployment for more details.
The above requires the deployment replica count to be set to more than 1, which is explained here: https://stackoverflow.com/a/45649024/1663462.
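A sketch of the moving-tag workflow described above; the digest is a placeholder you would take from your registry, and the deployment/container names follow the earlier wordpress example:

```shell
# Re-pushing :stable alone is a no-op, because the image string is unchanged.
# Pin the digest instead so the pod template actually differs:
kubectl set image deployment/wordpress \
  wordpress=eu.gcr.io/abcxyz/wordpress@sha256:<digest-from-your-registry>
```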
Is there any way for me to replicate the behavior I get on cloud.docker where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy handed, but works fine. I'm really missing the autoredeploy feature I have on cloud.docker.
Deleting the controller yaml file itself won't delete the actual controller in kubernetes unless you have a special configuration to do so. If you have more than 1 instance running, deleting the controller probably isn't what you would want because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment, update the image in the yaml file and execute kubectl apply.
For a ReplicationController, update the yaml file and execute kubectl rolling-update. See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image
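If the controller file in the question defines a Deployment, the delete/create pair collapses to apply calls, which trigger a rolling update in place whenever the pod template has changed (file names taken from the question's script):

```shell
# Re-applying the same files updates resources in place instead of
# tearing them down, so running pods are replaced incrementally:
kubectl apply -f ./ticketing-controller.yaml
kubectl apply -f ./ticketing-service.yaml
```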
Alternatively you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few Paas options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change
.spec.template.spec.containers[0].image from nginx:1.7.9 to
nginx:1.9.1, with the kubectl commands.
$ kubectl edit deployment/my-nginx
That’s it! The Deployment will declaratively update the deployed nginx
application progressively behind the scenes. It ensures that only a
certain number of old replicas may be down while they are being
updated, and only a certain number of new replicas may be created
above the desired number of pods.
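To watch the progressive update the documentation describes, the rollout can be followed from the command line (deployment name taken from the quoted example):

```shell
# Block until the rollout completes, reporting progress as replicas turn over:
kubectl rollout status deployment/my-nginx
```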