How can I upgrade an existing running deployment with a YAML deployment file, without changing the number of running replicas of that deployment?
So, I need to set the number of replicas on the fly, without changing the YAML file.
It is like running kubectl apply -f deployment.yaml along with kubectl scale --replicas=3 together, or in other words, applying deployment YAML changes while keeping the number of running replicas the same as it is.
For example: I have a running deployment whose pods have already been scaled to 5 replicas, and I need to change deployment parameters from CD (like upgrading the container image, changing environment variables, etc.) without manually checking the number of running pods and updating the YAML with it. How can I achieve this?
Use the kubectl edit command
kubectl edit (RESOURCE/NAME | -f FILENAME)
E.g. kubectl edit deployment.apps/webapp-deployment
It will open an editor. You can update the value for the number of replicas in the editor and save.
Refer to the documentation section Editing resources:
https://kubernetes.io/docs/reference/kubectl/cheatsheet/#editing-resources
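For instance, a sketch of the relevant part of the manifest that kubectl edit opens for the webapp-deployment above (the container name and image are hypothetical):
spec:
  replicas: 5                          # leave untouched to keep the current scale
  template:
    spec:
      containers:
      - name: webapp                   # hypothetical container name
        image: example.com/webapp:v2   # change the image (or env vars) here and save
Saving the editor buffer applies the change directly against the live object, so the replica count is never overwritten by a stale value from a local file.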
Related
I have encountered a strange situation in one of our clusters, where all of a sudden a number of new pods have been created so that we end up with a greater number of running pods than the scale amount.
So in the dashboard it will show
serviceX pods: 8/2
and then 8 running instances of that service
Questions
How can this possibly happen?
Is there an easy way to get rid of the extra pods (which all seem to be running)?
I have tried changing the scale amount in the dashboard and the extra pods do not disappear.
Both Pod and Deployment are full-fledged objects in the Kubernetes API. A Deployment manages creating Pods by means of ReplicaSets. What it boils down to is that the Deployment will create Pods with a spec taken from its template.
In your case the deployment named edgeservicepublic-svc is set to have 13 replicas. A Deployment is a kind of controller in Kubernetes, so it is natural that this controller will continuously check whether 13 pods are created. When a deployment is added to the cluster, it will automatically spin up the requested number of pods, and then monitor them. If a pod dies, the deployment will automatically re-create it. Probably not enough pods were created at first, so the controller keeps pursuing the desired number of them.
To make sure your deployment works properly, you can delete the deployment and make sure that its pods are deleted. Make sure that you haven't set up an autoscaler ($ kubectl get hpa); if so, delete it. Then, if you want to change the deployment specification, edit the deployment configuration file and apply the changes ($ kubectl apply -f deployment_configuration_file.yaml).
Useful documentation about deployments and autoscaling in the context of GKE.
EDIT:
Basically, first check for an autoscaler, then delete it if it exists. I told you to delete the deployment because you said you were trying to change the scale amount / number of replicas. So if you want to be 100% sure that changes are applied, delete the whole deployment and then recreate it with the desired number of replicas. Of course, you can just apply changes to the deployment configuration file ($ kubectl edit ...) or ($ kubectl apply -f ...), but sometimes existing pods are not deleted, so deleting the deployment is safer. You could also create a new deployment with the same parameters but a different name.
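A rough sequence of the checks described above, as kubectl commands (serviceX and the app label are hypothetical names):
kubectl get hpa                                       # check whether an autoscaler exists
kubectl delete hpa serviceX                           # delete it if it does
kubectl delete deployment serviceX                    # remove the deployment and its pods
kubectl get pods -l app=serviceX                      # confirm the pods are gone
kubectl apply -f deployment_configuration_file.yaml   # recreate with the desired replica count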
I have a YAML file which I can use to create pods. I am using the dashboard, so I can simply select the YAML file and it will create the pods. The pod starts the container and the container runs the Docker image. Now let's say I have made some changes to the Docker image and want to deploy it again. For this, I would delete the already running pod and upload the YAML file again.
Instead of deleting and uploading the YAML file again, is there any keyword available which will delete the already running pod/deployment and recreate it?
Thanks
If you are using this for development, you might get away with:
containers:
- image: my/app:dev
  imagePullPolicy: Always
With this, whenever your pod is recreated, you will get a fresh image version.
That said, you need to use something like a Deployment to have a pod restarted automatically, and then you can just kubectl delete pod my-pod-xxxxx-yyy to wipe out the old one and get a fresh, current one within a few seconds.
For prod, please don't do that. Just use tagged images and apply the changed image to your Deployment with kubectl apply -f my.yaml, or preferably something like Helm (but that is a more complicated topic for starters).
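A minimal sketch of the tagged-image approach for my.yaml (the app name, registry, and tag are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my/app:1.2.3   # bump this tag for each release
        imagePullPolicy: IfNotPresent
Running kubectl apply -f my.yaml after bumping the tag then performs a rolling update to the new image.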
I can't remember the StackOverflow question where I first saw this method, but here it is again:
kubectl --namespace thenamespace get pod thepod -o yaml | kubectl replace --save-config -f -
You can do that with all k8s resources.
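For example, the same pattern applied to a deployment (namespace and names are placeholders):
kubectl --namespace thenamespace get deployment thedeployment -o yaml | kubectl replace --save-config -f -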
So in order to update the images running on a pod, I have to modify the deployment config (YAML file) and run something like kubectl apply -f deploy.yaml.
This means that if I'm not editing the YAML file manually, I'll have to use some template / search-and-replace functionality, which isn't really ideal.
Are there any better approaches?
It seems there is a kubectl rolling-update command, but I'm not sure if this works for 'deployments'.
For example running the following: kubectl rolling-update wordpress --image=eu.gcr.io/abcxyz/wordpress:deploy-1502443760
Produces an error of:
error: couldn't find a replication controller with source id == default/wordpress
I am using this for changing images in Deployments:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
If you view the YAML files as the source of truth, then use a tag like stable in the YAML and only issue kubectl set image commands when the tag is moved. Use the sha256 image digest to actually trigger a rollout: image names are matched like strings, so updating from :stable to :stable is a no-op even if the tag now points to a different image.
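For example, a sketch of pinning to a digest so the rollout actually triggers (the digest is a placeholder for the image the stable tag currently points to):
kubectl set image deployment/nginx-deployment nginx=nginx@sha256:<digest-of-the-new-stable-image>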
See updating a deployment for more details.
The above requires the deployment replica count to be set to more than 1, which is explained here: https://stackoverflow.com/a/45649024/1663462.
What I am trying to do:
The app that runs in the Pod does some refreshing of its data files on start.
I need to restart the container each time I want to refresh the data.
(A refresh can take a few minutes, so I have a Probe checking for readiness.)
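(For reference, a minimal sketch of such a readiness probe; the endpoint, port, and timings are hypothetical:)
readinessProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 30    # allow a few minutes for the data refresh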
What I think is a solution:
I will run a scheduled job to do a rolling-update kind of deploy, which will take the old Pods out one at a time and replace them, without downtime.
Where I'm stuck:
How do I trigger a deploy if I haven't changed anything?
Also, I need to be able to do this from the scheduled job, obviously, so no manual editing.
Any other ways of doing this?
As of kubectl 1.15, you can run:
kubectl rollout restart deployment <deploymentname>
What this does internally is patch the deployment's pod template with a kubectl.kubernetes.io/restartedAt annotation, so a rollout is performed according to the deployment's update strategy.
For previous versions of Kubernetes, you can simulate a similar thing:
kubectl set env deployment --env="LAST_MANUAL_RESTART=$(date +%s)" "deploymentname"
And even replace all in a single namespace:
kubectl set env --all deployment --env="LAST_MANUAL_RESTART=$(date +%s)" --namespace=...
According to documentation:
Note: a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. .spec.template) is changed, e.g. updating labels or container images of the template.
You can just use kubectl patch to update, for example, a label inside .spec.template.
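For example, a sketch that patches a throwaway label into the pod template to force a rollout (the label key is arbitrary):
kubectl patch deployment deploymentname -p '{"spec":{"template":{"metadata":{"labels":{"last-manual-restart":"'"$(date +%s)"'"}}}}}'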
Is there any way for me to replicate the behavior I get on cloud.docker where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy-handed, but it works fine. I'm really missing the auto-redeploy feature I have on cloud.docker.
Deleting the controller YAML file itself won't delete the actual controller in Kubernetes unless you have a special configuration to do so. If you have more than 1 instance running, deleting the controller probably isn't what you would want, because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment, update the image in the YAML file and execute kubectl apply.
For a ReplicationController, update the YAML file and execute kubectl rolling-update (a sketch is shown after this list). See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image.
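For the ReplicationController case, a sketch of the rolling-update command (assuming ticketing-controller.yaml defines a single-container ReplicationController named ticketing-controller; the image is a placeholder):
kubectl rolling-update ticketing-controller --image=eu.gcr.io/abcxyz/ticketing:deploy-123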
Alternatively, you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few PaaS options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1 with the following kubectl command.
$ kubectl edit deployment/my-nginx
That’s it! The Deployment will declaratively update the deployed nginx application progressively behind the scenes. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods.
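To watch that progressive rollout (a standard kubectl command, not part of the quoted documentation):
kubectl rollout status deployment/my-nginx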