I'm running a GKE cluster with a deployment that uses an image I push to Container Registry on GCP. The issue is that even though I build the image and push it with the latest tag, the deployment keeps creating new pods with the old cached one. Is there a way to update it without re-deploying (i.e. without destroying it first)?
There is a known issue with Kubernetes where, even if you change a ConfigMap, the old config remains in use, and you can either redeploy or work around it with
kubectl patch deployment $deployment -n $ns -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
Is there something similar for cached images?
I think you're looking for kubectl set or kubectl patch, which I found in the Kubernetes documentation.
To update the image of a deployment you can use kubectl set:
kubectl set image deployment/name_of_deployment name_of_container=name_of_image
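For example, with a hypothetical deployment and container both named my-app, pulling from GCR, this might look like:
kubectl set image deployment/my-app my-app=gcr.io/my-project/my-app:v2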
To update the image of your pod you can use kubectl patch:
kubectl patch pod name_of_pod -p '{"spec":{"containers":[{"name":"name_of_container_from_yaml","image":"name_of_image"}]}}'
You can always use kubectl edit, which allows you to directly edit any API resource you can retrieve via the command line tool.
kubectl edit deployment name_of_deployment
Let me know if you have any more questions.
1) You should change the way you think about this. Destroying a pod is not bad; application downtime is what is bad. You should always plan your deployments so that they can tolerate the death of a single pod. Use multiple replicas for stateless apps and use clusters for stateful apps. Use the Kubernetes rolling update for any changes to your deployments. Rolling updates have many extremely important settings which directly influence the uptime of your apps. Read about them carefully.
2) The reason why Kubernetes launches the old image is that by default it uses
imagePullPolicy: IfNotPresent. Use imagePullPolicy: Always and it will always try to pull the latest version on redeploy.
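For example, a minimal sketch of a container spec with that policy (the names are placeholders):
containers:
- name: my-app
  image: gcr.io/my-project/my-app:latest
  imagePullPolicy: Always
Note that Always only takes effect when a pod is (re)created; it does not make already-running pods re-pull on their own.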
Related
I am using a Pod directly to manage our C* cluster in a K8s cluster, without any high-level controller. When I want to upgrade C*, I want to do an image update. Is this a good pattern for an upgrade?
I saw that the high-level Deployment controller supports image updates too, but that causes the Pod to be deleted and recreated, which in turn causes the IP to change. I don't want the IP to change, and I found that if I directly update the Pod's image, it causes a restart and also keeps the IP. This is the exact behavior I want; is this pattern right?
Is it safe to use in production?
I believe you can follow the K8s documentation for a more 'production ready' upgrade strategy. Basically, use updateStrategy: RollingUpdate:
$ kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
and then update the image:
$ kubectl patch statefulset cassandra --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cassandra:next-version"}]'
and watch your updates:
$ kubectl get pod -l app=cassandra -w
There's also Staging the Update, in case you'd like to update each C* node individually; for example, if the new version turns out to be incompatible, you can revert that C* node back to the original version.
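Staging works through the partition field of the rolling update strategy: only Pods with an ordinal greater than or equal to the partition are updated. A sketch, with an illustrative partition value:
$ kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
Lowering the partition step by step then rolls the remaining nodes one at a time.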
Also, familiarize yourself with all the Cassandra release notes before doing the upgrade.
I have a yaml file which I can use to create pods. I am using the dashboard, so I can simply select the yaml file and it will create the pods. The pod starts the container, and the container runs the docker image. Now let's say I have made some changes to the docker image and want to deploy it again. To do this, I delete the already running pod and upload the yaml file again.
Instead of deleting and re-uploading the yaml file, is there any keyword available which will delete the already running pod/deployment and recreate it?
Thanks
If you are using this for development you might get away with
containers:
- image: my/app:dev
imagePullPolicy: Always
With this, whenever your pod is recreated, you will get a fresh image version.
That said, you need to use something like a Deployment to have pods restarted automatically, and then you can just kubectl delete my-pod-xxxxx-yyy to wipe the old one and get a fresh, current one within a few seconds.
For prod, please don't do that. Just use tagged images and apply the changed image to your Deployment with kubectl apply -f my.yaml, or preferably with something like Helm (but that is a more complicated topic for starters).
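A minimal sketch of the tagged-image approach, with placeholder names: in my.yaml, bump the tag on every release,
containers:
- image: my/app:1.2.3
  imagePullPolicy: IfNotPresent
and then apply it:
kubectl apply -f my.yaml
Because the pod template changed, the Deployment performs a rolling update on its own.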
I can't remember the StackOverflow question where I first saw this method, but here it is again:
kubectl --namespace thenamespace get pod thepod -o yaml | kubectl replace --save-config -f -
You can do that with all k8s resources.
What I am trying to do:
The app that runs in the Pod does some refreshing of its data files on start.
I need to restart the container each time I want to refresh the data.
(A refresh can take a few minutes, so I have a Probe checking for readiness.)
What I think is a solution:
I will run a scheduled job to do a rolling-update kind of deploy, which will take the old Pods out, one at a time and replace them, without downtime.
Where I'm stuck:
How do I trigger a deploy if I haven't changed anything?
Also, I need to be able to do this from the scheduled job, obviously, so no manual editing.
Any other ways of doing this?
As of kubectl 1.15, you can run:
kubectl rollout restart deployment <deploymentname>
What this does internally is patch the deployment with a kubectl.kubernetes.io/restartedAt annotation, so the Deployment controller performs a rollout according to the deployment's update strategy.
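You can apply roughly the same patch by hand if you need to; this is a sketch, the only real requirement being that the annotation value changes each time:
kubectl patch deployment <deploymentname> -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}}}}"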
For previous versions of Kubernetes, you can simulate a similar thing:
kubectl set env deployment --env="LAST_MANUAL_RESTART=$(date +%s)" "deploymentname"
And even replace all in a single namespace:
kubectl set env --all deployment --env="LAST_MANUAL_RESTART=$(date +%s)" --namespace=...
According to documentation:
Note: a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. .spec.template) is changed, e.g. updating labels or container images of the template.
You can just use kubectl patch to update, e.g., a label inside .spec.template.
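For example, patching a throwaway label on the pod template (the label key here is arbitrary; template labels may carry extra keys beyond the selector):
kubectl patch deployment <deploymentname> -p \
"{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"redeploy\":\"$(date +%s)\"}}}}}"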
Currently, when updating a Kubernetes secrets file, in order to apply the changes I need to run kubectl apply -f my-secrets.yaml. If there was a running container, it would still be using the old secrets. In order to apply the new secrets to the running container, I currently run kubectl replace -f my-pod.yaml.
I was wondering if this is the best way to update a running container secret, or am I missing something.
Thanks.
For k8s versions v1.15+: kubectl rollout restart deployment $deploymentname: this will
restart pods incrementally without causing downtime.
The secret docs for users say this:
Mounted Secrets are updated automatically
When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. The update time depends on the kubelet syncing period.
Mounted secrets are updated. The question is when. Just because the content of a secret is updated does not mean that your application automatically consumes it; it is the job of your application to watch for file changes and act accordingly. With this in mind, you currently need to do a little more work. One way would be to run a scheduled job in Kubernetes which talks to the Kubernetes API to initiate a new rollout of your deployment. That way you could, in theory, renew your secrets. It is not elegant, but it is the only way I have in mind at the moment. I still need to check more of the Kubernetes concepts myself, so please bear with me.
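A sketch of such a scheduled job as an in-cluster CronJob; the ServiceAccount restart-sa (which needs RBAC permission to patch deployments), the schedule, the bitnami/kubectl image, and all names here are assumptions:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: refresh-rollout
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: restart-sa
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command: ["kubectl", "rollout", "restart", "deployment/mydeployment"]
(On clusters older than v1.21 the apiVersion would be batch/v1beta1, and before v1.15 the container would have to apply an annotation patch instead of rollout restart.)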
Assuming we have a running pod mypod [with the secret mounted as mysecret in the pod spec]:
We can delete the existing secret:
kubectl delete secret mysecret
recreate the same secret from the updated file(s):
kubectl create secret generic mysecret --from-file=<updated file/s>
then do
kubectl apply -f ./mypod.yaml
Check the secrets inside mypod; they will be updated.
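Alternatively, you can update the secret in place without deleting it first, by piping a client-side dry run into apply (a common idiom; the file name is a placeholder, and on older kubectl the flag is plain --dry-run):
kubectl create secret generic mysecret --from-file=<updated file/s> --dry-run=client -o yaml | kubectl apply -f -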
In case anyone (like me) wants to force a rolling update of the pods which are using those secrets: from this issue, the trick is to update an env variable inside the container; then k8s will automatically roll the entire set of pods:
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'
By design, Kubernetes won't push Secret updates into the environment variables of running Pods (mounted Secret volumes are eventually refreshed, but env vars are not). If you want a running Pod to pick up a new Secret value from its environment, you have to destroy and recreate the Pod.
Is there any way for me to replicate the behavior I get on cloud.docker where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy-handed, but it works fine. I'm really missing the autoredeploy feature I have on cloud.docker.
Deleting the controller yaml file itself won't delete the actual controller in kubernetes unless you have a special configuration to do so. If you have more than 1 instance running, deleting the controller probably isn't what you would want because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment controller: update the image in the yaml file and execute kubectl apply.
For a ReplicationController: update the yaml file and execute kubectl rolling-update. See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image.
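For example, once set image is available, the update might look like this (the container and image names are assumptions based on the question):
kubectl set image deployment/ticketing ticketing=myregistry/ticketing:v2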
Alternatively, you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few PaaS options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1 with the kubectl commands.
$ kubectl edit deployment/my-nginx
That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scenes. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods.
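Equivalently, the same update can be scripted without an interactive editor; since kubectl run names the container after the deployment, this should work here:
$ kubectl set image deployment/my-nginx my-nginx=nginx:1.9.1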