Stopping all pods in Kubernetes cluster before running database migration job - kubernetes

I deploy my app into the Kubernetes cluster using Helm. The app works with a database, so I have to run DB migrations before installing a new version of the app. I run the migrations as a Kubernetes Job using a Helm "pre-upgrade" hook.
The problem is that when the migration job starts, pods of the old version are still working with the database. They can lock objects in the database, and because of that the migration job may fail.
So I want to automatically stop all the pods in the cluster before the migration job starts. Is there any way to do that using Kubernetes + Helm? I will appreciate all answers.

There are two ways I can see to do this.
The first option is to scale down the pods before the deployment from your CI system (for example Jenkins, CircleCI, GitLab CI, etc.):
kubectl scale deployment {deployment-name} --replicas=0 -n {namespace}
helm install .....
The second option (which might be easier depending on how you want to maintain this going forward) is to add an additional pre-upgrade hook with a lower hook weight (i.e. higher priority) than the migrations hook, so it runs before the migration job, and then use that hook to do the kubectl scale down.
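For illustration, such a scale-down hook could look roughly like the Job below. This is only a sketch: the deployment name (my-app), the service account (scale-down-sa, which needs RBAC permission to scale deployments) and the kubectl image are placeholders you would adapt to your chart.
apiVersion: batch/v1
kind: Job
metadata:
  name: scale-down-before-migration
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-5"               # lower weight than the migration hook, so it runs first
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: scale-down-sa       # placeholder; needs RBAC rights to scale deployments
      restartPolicy: Never
      containers:
        - name: scale-down
          image: bitnami/kubectl              # placeholder; any image with kubectl on board
          command: ["kubectl", "scale", "deployment", "my-app", "--replicas=0"]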

Related

Application deployment over EKS using Jenkins

Can anyone tell me the deployment flow for deploying an application to a Kubernetes or EKS cluster using Jenkins? How are the deployment files updated when the Docker image changes? If we have multiple deployment files and we change the image for one of them, are all of them redeployed?
Can anyone tell me the deployment flow for deploying an application to a Kubernetes or EKS cluster using Jenkins?
Make sure that your Jenkins instance has an IAM Role and updated kubeconfig so that it can access the Kubernetes cluster. If you consider running the pipeline on the Kubernetes cluster, Jenkins X or Tekton Pipelines may be good alternatives that are better designed for Kubernetes.
How are the deployment files updated when the Docker image changes?
It is a good practice to also keep the deployment manifest in version control, e.g. Git. This can be in the same repository or in a separate repository. For updating the image after a new image is built, consider using yq. An example yq command to update the image in a deployment manifest (one line):
yq write --inplace deployment.yaml 'spec.template.spec.containers(name==<myapp>).image' \
<my-registry-host>/<my-image-repository>/<my-image-name>:<my-tag-name>
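For reference, that yq path assumes a manifest shaped roughly like this (the container name and image are placeholders matching the command above):
spec:
  template:
    spec:
      containers:
        - name: <myapp>
          image: <my-registry-host>/<my-image-repository>/<my-image-name>:<old-tag>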
If we have multiple deployment files and we change the image for one of them, are all of them redeployed?
Nope. Kubernetes YAML is declarative, so it "understands" what has changed and only "drives" the changed deployments to their "desired state"; the other deployments are already in their "desired state" and are left alone.

Initcontainer vs Helm Hook post-install

What is the difference between a Helm post-install hook and a Kubernetes init container? What I understood is that hooks are used to define actions during different stages of the lifecycle, in this case post-install, and init containers, on the other hand, allow you to initialize a container before it is deployed to the Pod.
As I understand it, a post-install hook and an init container allow you to do the same thing, i.e. initialize a database.
Is that correct? Which is the better approach?
For database setup I would prefer a Helm hook, but even then there are some subtleties.
Say your service is running as a Deployment with replicas: 3 for a little bit of redundancy. Every one of these replicas will run an init container, if it's specified in the pod spec, without any sort of synchronization. If one of the pods crashes, or its node fails, its replacement will run the init container again. For the sort of setup task you're talking about, you don't want to repeat it that often.
The fundamental difference here is that a Helm hook is a separate Kubernetes object, typically a Job. You can arrange for this Job to be run exactly once on each helm upgrade and at no other times, which makes it a reasonable place to run things like migrations.
The one important subtlety here is that you can have multiple versions of a service running at once. Take the preceding Deployment with replicas: 3, but then helm upgrade --set tag=something-newer. The Deployment controller will first start a new pod with the new image, and only once it's up and running will it tear down an old pod, and now you have both versions going together. Similar things will happen if you helm rollback to an older version. This means you need some tolerance for the database not quite having the right schema.
If the job is more like a "seed" job that preloads some initial data, this is easier to manage: do it in a post-install hook, which you expect to run only once ever. You don't need to repeat it on every upgrade (as a post-upgrade hook) or on every pod start (as an init container).
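As an illustration only, a migration Job wired up as a Helm hook might look roughly like this; the image, command and hook settings are placeholders, and you would pick pre-upgrade for schema migrations or post-install for one-time seed data:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-migrate"
  annotations:
    "helm.sh/hook": pre-upgrade               # or post-install for one-time seed data
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: my-app:{{ .Values.tag }}     # placeholder: the app image, run only for the migration
          command: ["./manage.py", "migrate"] # placeholder migration command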
A Helm install hook and an init container are fundamentally different. An install hook in Helm creates a completely separate pod, which means that pod cannot reach the main pod directly via localhost or share the same volume mounts, while an init container can do so.
An init container, which is comparable to a Helm pre-install hook, is limited in that it can only perform initial tasks before the main pod starts; it cannot perform tasks that need to run after the pod is started, for example any clean-up activity.
Initialization of a DB etc. needs to be done before the actual container starts, and I think an init container is sufficient for this use case, but a Helm pre-install hook can also be used.
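A minimal sketch of the init-container variant, assuming a database Service named db listening on port 5432 (all names are placeholders); the same pattern applies if the init container runs an actual initialization script instead of just waiting:
spec:
  template:
    spec:
      initContainers:
        - name: wait-for-db
          image: busybox                      # placeholder; any small image with a shell and nc
          command: ["sh", "-c", "until nc -z db 5432; do echo waiting for db; sleep 2; done"]
      containers:
        - name: my-app
          image: my-app:latest                # placeholder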
You need to use a post-install hook, since first you have to create the DB pod and then initialize the DB. You will notice that the pod for the post hook comes up after the DB pod starts running. The post-hook pod is removed after the hook is executed.

How to run a script in a pod once, manually, using helm

I'm looking for the correct way to run a one-time maintenance script on my Kubernetes cluster.
I've got my deployment configured via Helm, so everything is bundled in my chart and works extremely well from an automation point of view.
Problem is running a script just once. I know Helm has hooks, but I don't think those can be configured to run manually (only pre/post upgrade/install etc.). This is compared to running kubectl apply -f my-maintenance-script.yaml, which I can do just once and be done with.
Is there a best-practice way of doing this? I want to be able to use Helm since I can feed all my config/template values into the Job.
You can use a Kubernetes Job, and use helm test to run the Job.
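A rough sketch of what that could look like in the chart, assuming a Helm version that accepts a Job as a test hook (older Helm 2 releases only ran Pods annotated helm.sh/hook: test-success); the names and the script are placeholders:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-maintenance"
  annotations:
    "helm.sh/hook": test                      # only runs when you invoke `helm test <release>`
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: maintenance
          image: "my-app:{{ .Values.tag }}"   # placeholder; can reuse the chart's values
          command: ["/bin/sh", "-c", "./run-maintenance.sh"]   # placeholder one-off script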

kubernetes gcp caching old image

I'm running a GKE cluster, and there is a deployment that uses an image which I push to the Container Registry on GCP. The issue is that even though I build the image and push it with the latest tag, the deployment keeps creating new pods with the old one cached. Is there a way to update it without re-deploying (i.e. without destroying it first)?
There is a known issue with Kubernetes where, even if you change a ConfigMap, the old config remains; you can either redeploy or work around it with:
kubectl patch deployment $deployment -n $ns -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
Is there something similar for cached images?
I think you're looking for kubectl set or kubectl patch, which are described in the Kubernetes documentation.
To update the image of a deployment you can use kubectl set:
kubectl set image deployment/name_of_deployment name_of_container=name_of_image:new_tag
To update the image of your pod you can use kubectl patch:
kubectl patch pod name_of_pod -p '{"spec":{"containers":[{"name":"name_of_container_from_yaml","image":"name_of_image:new_tag"}]}}'
You can always use kubectl edit, which allows you to directly edit any API resource you can retrieve via the command-line tool.
kubectl edit deployment name_of_deployment
Let me know if you have any more questions.
1) You should change the way you think about this. Destroying a pod is not bad; application downtime is what is bad. You should always plan your deployments so that they can tolerate the death of one pod. Use multiple replicas for stateless apps and use clustering for stateful apps. Use Kubernetes rolling updates for any changes to your deployments. Rolling updates have many extremely important settings which directly influence the uptime of your apps. Read them carefully.
2) The reason why Kubernetes launches the old image is that by default it uses imagePullPolicy: IfNotPresent. Use imagePullPolicy: Always and it will always try to pull the latest version on redeploy.
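In the Deployment manifest this is set per container, for example (the image name is a placeholder):
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: gcr.io/my-project/my-app:latest   # placeholder GCR image
          imagePullPolicy: Always                  # pull on every pod start instead of trusting the node's cache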

Redeploying a Google Container Controller when the repository Image Changes

Is there any way for me to replicate the behavior I get on cloud.docker where a service can be redeployed either manually with the latest image or automatically when the repository image is updated?
Right now I'm doing something like this manually in a shell script with my controller and service files:
kubectl delete -f ./ticketing-controller.yaml || true
kubectl delete -f ./ticketing-service.yaml || true
kubectl create -f ./ticketing-controller.yaml
kubectl create -f ./ticketing-service.yaml
Even that seems a bit heavy handed, but works fine. I'm really missing the autoredeploy feature I have on cloud.docker.
Deleting the controller yaml file itself won't delete the actual controller in Kubernetes unless you have a special configuration to do so. If you have more than one instance running, deleting the controller probably isn't what you want, because it would delete all the instances of your running application. What you really want to do is perform a rolling update of your application that incrementally replaces containers running the old image with containers running the new one.
You can do this manually by:
For a Deployment, update the image in the yaml file and execute kubectl apply.
For a ReplicationController, update the yaml file and execute kubectl rolling-update. See: http://kubernetes.io/docs/user-guide/rolling-updates/
With v1.3 you will be able to use kubectl set image
Alternatively you could use a PaaS to automatically push the image when it is updated in the repo. Here is an incomplete list of a few PaaS options:
Red Hat OpenShift
Spinnaker
Deis Workflow
According to Kubernetes documentation:
Let’s say you were running version 1.7.9 of nginx:
$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3
deployment "my-nginx" created
To update to version 1.9.1, simply change
.spec.template.spec.containers[0].image from nginx:1.7.9 to
nginx:1.9.1, with the kubectl commands.
$ kubectl edit deployment/my-nginx
That’s it! The Deployment will declaratively update the deployed nginx
application progressively behind the scene. It ensures that only a
certain number of old replicas may be down while they are being
updated, and only a certain number of new replicas may be created
above the desired number of pods.
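Those limits correspond to the rolling-update strategy fields on the Deployment; a minimal sketch with assumed values:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one old replica may be down during the update
      maxSurge: 1         # at most one extra replica above the desired count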