Kubernetes rolling update for same image

The Kubernetes documentation describes how to do a rolling update for an updated Docker image. In my case I need to do a rolling update of my pods using the same image. Is it possible to do a rolling update of a replication controller with the same Docker image?

In my experience, you cannot. If you try to (e.g., using the method George describes), you get the following error:
error: must specify a matching key with non-equal value in Selector for api
see 'kubectl rolling-update -h' for help.
The above is with Kubernetes v1.1.

Sure you can. Try this command:
$ kubectl rolling-update <rc name> --image=<image-name>:<tag>
If your image:tag has been used before, you may want to do the following on the node to make sure you get the latest image on Kubernetes:
$ docker pull <image-name>:<tag>
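For what it's worth, on newer clusters that use Deployments instead of replication controllers, there is a one-liner to recreate the pods with the same image (a sketch, assuming a hypothetical Deployment named my-app and kubectl 1.15 or later):
# Recreates the pods under the existing spec; the image is pulled again if imagePullPolicy is Always
$ kubectl rollout restart deployment/my-app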

Related

unable to create a pv due to VOL_DIR: parameter not set

I'm running rke2 version v1.22.7+rke2r2 on 3 nodes. Today I decided to reinstall my application and I'm not able to do it anymore due to a problem claiming the PV.
I have never had this problem before, and I think it is due to an update of local-path-provisioner, but I'm not sure; I'm still a newbie with Kubernetes.
Anyway, these are the commands I run before installing my solution:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
I omitted MetalLB. Then, as a test, I tried to install the example described on the local-path-provisioner website (https://github.com/rancher/local-path-provisioner):
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pvc/pvc.yaml
kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/examples/pod/pod.yaml
What I see is that the PVC stays in Pending status. When I check pod creation in the local-path-storage namespace, I see that the helper-pod-create-pvc-xxxx pod goes into an error state.
I tried to get some logs, and the only thing I was able to grab is this:
kubectl -n local-path-storage logs helper-pod-create-pvc-dd8cecf3-d65b-48f7-9e04-d56a20573f8e -f
/script/setup: line 3: VOL_DIR: parameter not set
So it seems VOL_DIR is not set for whatever reason. But I never did any custom configuration; it always started without problems, and to be honest I don't know what to put in the VOL_DIR environment variable, or where.
I'll just answer my own question. It seems to be a bug in local-path-provisioner, and they are fixing it.
In the meantime, instead of using the latest manifest from master, which has the bug, please use v0.0.21, like this:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.21/deploy/local-path-storage.yaml
I tested and it works fine.
The deploy manifest in the master branch has since been fixed.
The master branch is for development, so please use a v0.0.x tag (e.g. v0.0.21, a stable release) for production use.
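If you want to double-check which provisioner version is actually running after applying a manifest, a quick check like this should work (a sketch, assuming the default namespace and deployment names used by the manifest):
# Print the image of the provisioner deployment's container
kubectl -n local-path-storage get deployment local-path-provisioner -o jsonpath='{.spec.template.spec.containers[0].image}'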

kubernetes gcp caching old image

I'm running a GKE cluster, and there is a deployment that uses an image which I push to Container Registry on GCP. The issue is that even though I build the image and push it with the latest tag, the deployment keeps creating new pods with the old cached one. Is there a way to update it without re-deploying (i.e. without destroying it first)?
There is a known issue in Kubernetes where, even if you change a ConfigMap, the old config remains; you can either redeploy or work around it with
kubectl patch deployment $deployment -n $ns -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
Is there something similar for cached images?
I think you're looking for kubectl set or kubectl patch, which are described in the Kubernetes documentation.
To update the image of a deployment you can use kubectl set:
kubectl set image deployment/name_of_deployment name_of_deployment=image:name_of_image
To update the image of a pod you can use kubectl patch:
kubectl patch pod name_of_pod -p '{"spec":{"containers":[{"name":"name_of_pod_from_yaml","image":"name_of_image"}]}}'
You can always use kubectl edit, which allows you to directly edit any API resource you can retrieve via the command-line tool:
kubectl edit deployment name_of_deployment
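After kubectl set image, you can also watch the change roll out before assuming it succeeded (a sketch, reusing the deployment name from above):
# Blocks until the rollout finishes or fails
kubectl rollout status deployment/name_of_deployment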
Let me know if you have any more questions.
1) You should change the way you think about this. Destroying a pod is not bad; application downtime is what is bad. You should always plan your deployments so that they can tolerate the death of one pod. Use multiple replicas for stateless apps and use clusters for stateful apps. Use Kubernetes rolling updates for any changes to your deployments. Rolling updates have several extremely important settings which directly influence the uptime of your apps; read about them carefully.
2) The reason Kubernetes launches the old image is that, by default, it uses
imagePullPolicy: IfNotPresent. Use imagePullPolicy: Always and it will always try to pull the latest version on redeploy.
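If the deployment was created with IfNotPresent, one way to switch it is a strategic merge patch like the sketch below (my-deployment and my-container are placeholders, not names from the question):
# The containers list is merged by container name, so only the named container is changed
kubectl patch deployment my-deployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-container","imagePullPolicy":"Always"}]}}}}'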

kubectl set image error: arguments in resource/name form may not have more than one slash (kubernetes)

I want to deploy my project to the Kubernetes cluster. I want to deploy it by using the command:
- kubectl set image deployment/$CLUSTER_NAME gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest
But here I get the error from the title: arguments in resource/name form may not have more than one slash.
It's a misleading error message.
Essentially, instead of abcxyz/abcxyz:example you also need to specify the container name that the image should be assigned to, for example example=abcxyz/abcxyz:example.
It's quite confusing and misleading, I have to say. The public docs don't help much, but kubectl set image --help does.
The problem is that you might have multiple containers in the deployment. If you have only one, you can do something like this (note that this works, but it is not as specific as you might want):
# The part before = is spec.template.spec.containers.name, the container name that sits next to image
kubectl set image deployments goliardiait-staging=gcr.io/goliardia-prod/goliardia-it-matrioska:2.12 --all
I'll update when I find what nails it. In your case:
kubectl set image deployment $CLUSTER_NAME=gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest --all
It is working using a command like this:
- kubectl set image deployment/$CLUSTER_NAME $INSTANCE_NAME=gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest
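If you are not sure what to put before the =, one way to list the container names defined in a deployment is something like this (a sketch; substitute your own deployment name):
# Prints the names from spec.template.spec.containers
kubectl get deployment $CLUSTER_NAME -o jsonpath='{.spec.template.spec.containers[*].name}'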

Pod image update and container restart keeping the same IP

I am using a Pod directly to manage our C* cluster in a K8s cluster, not using any high-level controller. When I want to upgrade C*, I want to do an image update. Is this a good pattern for an upgrade?
I saw that the high-level Deployment controller supports image updates too, but that deletes and recreates the Pod, which in turn causes the IP to change. I don't want the IP to change, and I found that if I update the Pod image directly, it restarts the container and also keeps the IP. This is the exact behavior I want; is this pattern right?
Is it safe to use in production?
I believe you can follow the K8s documentation for a more 'production ready' upgrade strategy. Basically, use the updateStrategy=RollingUpdate:
$ kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
and then update the image:
$ kubectl patch statefulset cassandra --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cassandra:next-version"}]'
and watch your updates:
$ kubectl get pod -l app=cassandra -w
There's also Staging the Update, in case you'd like to update each C* node individually; for example, if the new version turns out to be incompatible, you can revert that C* node back to the original version. A sketch of staging is shown below.
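Something along these lines should work (a sketch, reusing the cassandra statefulset name; the partition value is arbitrary):
# Only pods with an ordinal >= 2 get the new image; lower the partition to roll the change out further
$ kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'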
Also, familiarize yourself with the Cassandra release notes before doing the upgrade.

Using kubectl set image to update image of initContainer

Currently, to update a k8s deployment image, we use the kubectl set image command like this:
kubectl set image deployment/deployment_name container=url_to_container
While this command updates the URL used for the main container in the deployment, it does not update the URL for the initContainer also set within the deployment.
Is there a similar kubectl command I can use to update the initContainer to the same URL?
Since the accepted answer was written, the ability to set the image of Kubernetes init containers has been added. Using the same command, simply use the init container's name for the container part of the command you supplied.
kubectl set image deployment/deployment_name myInitContainer=url_to_container
In case you want to update both container images in a single command, use:
kubectl set image deployment/deployment_name myInitContainer=url_to_container container=url_to_container
The documentation seems to suggest that only regular containers are concerned.
Maybe you could switch to kubectl patch?
(I know it's more tedious...)
kubectl patch deployment/deployment_name --patch "{\"spec\": {\"template\": {\"spec\": {\"initContainers\": [{\"name\": \"container_name\",\"image\": \"url_to_container\"}]}}}}"
The snippet below is based on the solution provided by @Hiruma. I removed the empty spaces in the JSON argument and added a namespace example.
I did that because I faced issues when I was writing a pipeline for drone.io.
kubectl patch deployment deployment_name -n namespace_name -p "{\"spec\":{\"template\":{\"spec\":{\"initContainers\":[{\"name\":\"deployment_name\",\"image\":\"url_to_container\"}]}}}}"
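To confirm that both the init container and the main container picked up the new image, a check along these lines should work (a sketch; the deployment and namespace names are the same placeholders as above):
# Prints the init container images and the regular container images from the pod template
kubectl get deployment deployment_name -n namespace_name -o jsonpath='{.spec.template.spec.initContainers[*].image}{"\n"}{.spec.template.spec.containers[*].image}{"\n"}'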