Pod image update and container restart keeping the same IP question - kubernetes

I am using a Pod directly to manage our C* (Cassandra) cluster in a K8s cluster, not using any high-level controller. When I want to upgrade C*, I do an image update. Is this a good pattern for an upgrade?
I saw that the high-level Deployment controller supports image updates too, but that causes the Pod to be deleted and recreated, which in turn causes the IP to change. I don't want the IP to change, and I found that if I update the Pod's image directly, it causes a container restart and also keeps the IP. This is the exact behavior I want; is this pattern right?
Is it safe to use in production?

I believe you can follow the K8s documentation for a more 'production ready' upgrade strategy. Basically, use the updateStrategy=RollingUpdate:
$ kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
and then update the image:
$ kubectl patch statefulset cassandra --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cassandra:next-version"}]'
and watch your updates:
$ kubectl get pod -l app=cassandra -w
There's also Staging the Update in case you'd like to update each C* node individually; for example, if the new version turns out to be incompatible, you can revert that C* node back to the original version.
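As a sketch of staging, a partitioned RollingUpdate only touches Pods whose ordinal is greater than or equal to the partition, so you can canary one node first (the partition value here is an assumption):
$ kubectl patch statefulset cassandra -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
With 3 replicas and partition 2, only cassandra-2 gets the new image; lower the partition to 0 once you're satisfied and the remaining nodes will roll.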
Also, familiarize yourself with the Cassandra release notes before doing the upgrade.

Related

kubernetes gcp caching old image

I'm running a GKE cluster, and there is a deployment that uses an image which I push to Container Registry on GCP. The issue is that even though I build the image and push it with the latest tag, the deployment keeps creating new pods with the old one cached. Is there a way to update it without re-deploying (aka without destroying it first)?
There is a known issue in Kubernetes where even if you change a ConfigMap the old config remains; you can either redeploy or work around it with
kubectl patch deployment $deployment -n $ns -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
Is there something similar for cached images?
I think you're looking for kubectl set or kubectl patch, which are described in the Kubernetes documentation.
To update the image of a deployment you can use kubectl set:
kubectl set image deployment/name_of_deployment name_of_container=name_of_image:tag
To update the image of your pod you can use kubectl patch:
kubectl patch pod name_of_pod -p '{"spec":{"containers":[{"name":"name_of_container_from_yaml","image":"name_of_image"}]}}'
You can always use kubectl edit, which allows you to directly edit any API resource you can retrieve via the command line tool.
kubectl edit deployment name_of_deployment
Let me know if you have any more questions.
1) You should change your way of thinking. Destroying a pod is not bad; application downtime is what is bad. You should always plan your deployments in such a way that they can tolerate the death of one pod. Use multiple replicas for stateless apps and use clusters for stateful apps. Use Kubernetes rolling updates for any changes to your deployments. Rolling updates have many extremely important settings which directly influence the uptime of your apps, so read about them carefully; a sketch of these settings follows below.
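For illustration, the settings in question live in the Deployment's strategy block (the numbers below are illustrative assumptions, not recommendations):
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod may be down during the rollout
      maxSurge: 1         # at most one extra pod above the desired count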
2) The reason Kubernetes launches the old image is that by default it uses
imagePullPolicy: IfNotPresent. Use imagePullPolicy: Always and it will always try to pull the latest version on redeploy.
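In the pod template that looks like this (the container name and image are placeholders):
containers:
- name: my-app
  image: gcr.io/my-project/my-app:latest
  imagePullPolicy: Always   # pull the image on every container start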

Kubernetes CSI driver upgrade

We are developing a k8s CSI driver.
Currently, in order to upgrade the driver, we delete the installed operator pods, CRDs, and roles and recreate them from the new version's images.
What is the suggested way to do an upgrade? Or is uninstall/install the suggested method?
I couldn't find any relevant information.
We also support installing from OpenShift. Is there any difference regarding upgrades from OpenShift?
You should start from this documentation:
This page describes to CSI driver developers how to deploy their
driver onto a Kubernetes cluster.
Especially:
Deploying a CSI driver onto Kubernetes is highlighted in detail in
Recommended Mechanism for Deploying CSI Drivers on Kubernetes.
Also, you will find there all the necessary info with an example.
Your question lacks some details regarding your use case, but I strongly recommend starting from the guide presented above.
Please, let me know if that helps.
CSI drivers can differ, but I believe the best approach is to do a rolling update of your plugin's DaemonSet. It will happen automatically once you apply the new DaemonSet configuration, e.g. a newer docker image.
For more details, see https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
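Note that the automatic roll only happens if the DaemonSet uses the RollingUpdate strategy; a minimal sketch (on older API versions the default was OnDelete):
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace the plugin pod on one node at a time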
For example:
kubectl get -n YOUR-NAMESPACE daemonset YOUR-DAEMONSET --export -o yaml > plugin.yaml
vi plugin.yaml # Update your image tag(s)
kubectl apply -n YOUR-NAMESPACE -f plugin.yaml
A shorter way to update just the image:
kubectl set image ds/YOUR-DAEMONSET-NAME YOUR-CONTAINER-NAME=YOUR-IMAGE-URL:YOUR-TAG -n YOUR-NAMESPACE
Note: I found that I also needed to restart (kill) the pod with the external provisioner. There's probably a more elegant way to handle this, but it works in a pinch.
kubectl delete pod -n YOUR-NAMESPACE YOUR-EXTERNAL-PROVISIONER-POD

How can we setup kubernetes to automatically change containers when a new one is pushed?

I'm using Google Cloud to store my Docker images and host my Kubernetes cluster. I'm wondering how I can have Kubernetes pull down the container which has the latest tag each time a new one is pushed.
I thought imagePullPolicy was the way to go, but it doesn't seem to be doing the job (I may be missing something). Here is my container spec:
"name": "blah",
"image": "gcr.io/project-id/container-name:latest",
"imagePullPolicy": "Always",
"env": [...]
At the moment I'm having to delete and recreate the deployments when I upload a new docker image.
Kubernetes itself will never trigger on a container image update in the repository. You need some sort of CI/CD pipeline in your tooling. Furthermore, I strongly advise avoiding :latest, as it makes your container change over time. It is much better in my opinion to use some sort of versioning. Be it semantic like image:1.4.3, commit based like image:<gitsha>, or, as I use, image:<gitsha>-<pushid>, where pushid is a sequentially updated value for each push to the repo (so that the label changes even if I reupload from the same build).
With such versioning, if you change the image in your manifest, the deployment will get a rolling update as expected.
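A minimal sketch of that tagging scheme in a CI step (the project and app names, and the PUSH_ID variable, are hypothetical):
# build a unique tag from the commit sha plus a per-push counter from the CI system
TAG="$(git rev-parse --short HEAD)-${PUSH_ID}"
docker build -t gcr.io/my-project/my-app:"$TAG" .
docker push gcr.io/my-project/my-app:"$TAG"
# changing the image in the deployment triggers the rolling update
kubectl set image deployment/my-app my-app=gcr.io/my-project/my-app:"$TAG"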
If you want to stick to image:latest, you can add a label with a version to your pod template, so if you bump it, it will roll (see the sketch below). You can also just kill pods manually one by one, or (if you can afford downtime) you can scale the deployment to 0 replicas and back to N.
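A sketch of the version label trick, assuming a Deployment whose pod template you edit on each push (any change to the pod template triggers a rollout):
spec:
  template:
    metadata:
      labels:
        app: my-app       # placeholder; must still match the selector
        version: "42"     # bump this value to force a roll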
Actually, you can patch your deployment so it rolls (and therefore re-pulls the image) by touching an annotation in the spec.template part of your manifest (put your deployment name there):
kubectl patch deployment YOUR-DEPLOYMENT-NAME -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
Now, every time you push a new version to your container registry (DockerHub, ECR, ...), go to your Kubernetes CLI and run:
kubectl rollout restart deployment/YOUR-DEPLOYMENT-NAME

Redeploying a Google Container Controller when the repository Image Changes

Is there any way for me to replicate the behavior I get on cloud.docker where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy-handed, but it works fine. I'm really missing the auto-redeploy feature I have on cloud.docker.
Deleting the controller yaml file itself won't delete the actual controller in Kubernetes unless you have a special configuration to do so. If you have more than one instance running, deleting the controller probably isn't what you want, because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment controller, update the image in the yaml file and execute kubectl apply.
For a ReplicationController, update the yaml file and execute kubectl rolling-update. See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image
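For example, a sketch with names loosely based on the question (the deployment and container names are assumptions):
kubectl set image deployment/ticketing ticketing=gcr.io/my-project/ticketing:v2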
Alternatively, you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few PaaS options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change
.spec.template.spec.containers[0].image from nginx:1.7.9 to
nginx:1.9.1, with the kubectl commands.
$ kubectl edit deployment/my-nginx
That’s it! The Deployment will declaratively update the deployed nginx
application progressively behind the scene. It ensures that only a
certain number of old replicas may be down while they are being
updated, and only a certain number of new replicas may be created
above the desired number of pods.

Kubernetes rolling update for same image

The Kubernetes documentation says to do a rolling update for an updated Docker image. In my case I need to do a rolling update for my pods using the same image. Is it possible to do a rolling update of a replication controller with the same Docker image?
In my experience, you cannot. If you try to (e.g., using the method George describes), you get the following error:
error: must specify a matching key with non-equal value in Selector for api
see 'kubectl rolling-update -h' for help.
The above was with Kubernetes v1.1.
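For reference, that error means rolling-update wants the replacement ReplicationController to have a different name and at least one selector label with a changed value. A sketch of the usual workaround under those constraints (the file and label names are assumptions): copy the RC manifest, give it a new name, bump a label such as deployment: v2 in both .spec.selector and the pod template labels, then run:
kubectl rolling-update my-rc -f my-rc-v2.yaml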
Sure you can. Try this command:
$ kubectl rolling-update <rc name> --image=<image-name>:<tag>
If your image:tag has been used before, you may want to do the following to make sure you get the latest image on Kubernetes.
$ docker pull <image-name>:<tag>