Kubernetes Job with single Pod vs single Pod with restart policy OnFailure - kubernetes

What are the benefits of a Job with a single Pod over just a single Pod with restart policy OnFailure, to reliably execute once in Kubernetes?
As discussed in Job being constantly recreated despite RestartPolicy: Never, with a Job a new Pod will be created endlessly if the container returns a non-zero status. The same retry behaviour applies to a single OnFailure Pod, except that no new Pods are created, which is arguably even cleaner.
What are the pros and cons of either approach? Can Pod restart parameters, such as the restart delay or the number of retry attempts, be controlled in either case?

The difference is that if a Job doesn't complete because the node that its pod was on went offline for some reason, then a new pod will be created to run on a different node. If a single pod doesn't complete because its node became unavailable, it won't be rescheduled onto a different node.
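On the second question: a Job does let you cap how many times the run is retried, via spec.backoffLimit. A minimal sketch of a single-Pod Job (name, image and command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: run-once               # hypothetical name
spec:
  backoffLimit: 3              # give up after 3 failed Pods
  template:
    spec:
      restartPolicy: Never     # each retry runs in a fresh Pod
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo done"]

The delay between retries is an exponential back-off managed by the Job controller and is not directly configurable; a bare Pod with restartPolicy: OnFailure gives you neither a retry cap nor a configurable delay, only the kubelet's built-in back-off.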

Related

What is default behavior of Kubernetes when pod crashes?

In a Kubernetes deployment with 4 static pods and no autoscaling, what happens by default if one pod crashes? Will it be re-created automatically with the same ID or a different ID, or will the application continue running on 3 pods?
When a pod crashes, it will automatically be restarted. You will see this in the incrementing "Restarts" count for the pod when you run kubectl get pods.
From the documentation: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template
Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.
In other words, a deployment will ALWAYS restart your pod, regardless, and you cannot change that behaviour.
A restart will not change the name of the pod (or ID, as you have called it).
The only time the pod name will change is if the pod gets deleted. This can happen during autoscaling processes or if the pod gets evicted from a node.
You've specified no autoscaling in your deployment, but if you have specified a value of 4 replicas, as I suspect you have, then the eviction will cause that one pod to change names as it gets recreated on another node, in order to meet your request for 4 replicas.
By "changing names" I just mean the hash at the end of the pod name will change. So your pod named my-test-g4gsv may be renamed to my-test-4dsv4 after it goes to a new node.
There is a backoff policy for restarts, so if Kubernetes detects that a pod has been restarted repeatedly, it will start delaying its restart attempts. You will notice this as a CrashLoopBackOff value under the pod's status (instead of Running). While in this state the pod is not started, so during this time your deployment is essentially running with reduced replicas until Kubernetes starts it again.
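For illustration, a minimal Deployment manifest along those lines (names and image are placeholders); the restartPolicy line could be left out entirely, since Always is both the default and the only value a Deployment accepts:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-test
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-test
  template:
    metadata:
      labels:
        app: my-test
    spec:
      restartPolicy: Always    # the only restartPolicy a Deployment allows
      containers:
      - name: app
        image: nginx           # placeholder image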

pod - How to kill or stop only one pod from n replicas of a deployment

I have a testing scenario to check whether API requests are handled by another pod if one goes down. I know this is the default behaviour, but I want to simulate the following scenario.
Pod replicas - 2 (pod A and B)
During my API requests, I want to kill/stop only pod A.
During downtime of A, requests should be handled by B.
I am aware that we can restart the deployment and also scale replicas to 0 and again to 2, but this won't work for me.
Is there any way to kill/stop/crash only pod A?
Any help will be appreciated.
If you want to simulate what happens if one of the pods just gets lost, you can scale down the deployment
kubectl scale deployment the-deployment-name --replicas=1
and Kubernetes will terminate all but one of the pods; you should almost immediately see all of the traffic going to the surviving pod.
But if instead you want to simulate what happens if one of the pods crashes and restarts, you can delete the pod
# kubectl scale deployment the-deployment-name --replicas=2
kubectl get pods
kubectl delete pod the-deployment-name-12345-f7h9j
Once the pod starts getting deleted, the Kubernetes Service should route all of the traffic to the surviving pod(s) (those with Running status). However, the pod is managed by a ReplicaSet that wants there to be 2 replicas, so if one of the pods is deleted, the ReplicaSet will immediately create a new one. This is similar to what happens when a pod crashes and restarts, except that a crash restarts the same pod on the same node, whereas a deleted pod's replacement might come back somewhere else.
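If you want to watch the ReplicaSet create the replacement in real time, something along these lines works (the pod name is the same placeholder as above):

kubectl get pods -w
# in a second terminal:
kubectl delete pod the-deployment-name-12345-f7h9j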
As you mentioned, you can manually kill or restart the pod; that is really the only way to test this case. You could also try crashing the single pod, but in the end it produces the same scenario: the pod will automatically restart.
Alternatively, you could increase the graceful shutdown period for the deployment; that way the pod stays in the Terminating state for a good amount of time, and you can perform the test during that window.
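A rough sketch of what that could look like (names, image and the 120-second value are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment-name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: the-app
  template:
    metadata:
      labels:
        app: the-app
    spec:
      terminationGracePeriodSeconds: 120   # allow up to 2 minutes for shutdown
      containers:
      - name: app
        image: nginx                       # placeholder image

Keep in mind the pod only stays in Terminating for the full period if the container does not exit promptly on SIGTERM; if it exits right away, the grace period is cut short.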
In Kubernetes, when pods are controlled by a ReplicaSet, a pod you kill will simply be recreated. So the only way to do this is to scale down the number of replicas.
Let's say your deployment has 4 replicas. You can scale down to 3 by running the command below.
kubectl scale deployment <deployment-name> --replicas=3
My example is shown below
kubectl scale deployment hello-world --replicas=3
deployment.apps/hello-world scaled

Kubernetes StatefulSet restart waits if one or more pods are not ready

I have a statefulset which consists of multiple pods. I have a use case where I need to trigger a restart of the STS, so I run: kubectl rollout restart statefulset mysts
If I restart the statefulset at a time when one or more pods are in a not-ready state, the restart action gets queued up. The restart takes effect only after all the pods become ready, which could take a long time depending on the readiness threshold and the kind of issue the pod is facing.
Is there a way to force a restart of the statefulset without waiting for the pods to become ready? I don't want to terminate/delete the pods instead of restarting the statefulset; a rolling restart works well for me as it helps avoid an outage of the application.

Would Kubernetes bring up the down-ed Pod if only Pod definition file exists?

I have a Pod definition file only. Kubernetes will bring up the pod. What happens if it goes down? Would Kubernetes bring it up automatically? Or, if we want a certain number of pods up at all times, MUST we take the help of a ReplicationController (or ReplicaSet in newer versions)?
Although your question is not entirely clear: yes, if you have deployed the pod through a Deployment or ReplicaSet, then Kubernetes will create another one if you or someone else deletes that pod.
If you have just the pod, without any controller like a ReplicaSet, then it is gone forever, as there is no one to take care of it.
In case the app crashes inside the pod:
A CrashLoopBackOff means that you have a pod starting, crashing, starting again, and then crashing again.
A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never which applies to all containers in a pod. The default value is Always and the restartPolicy only refers to restarts of the containers by the kubelet on the same node (so the restart count will reset if the pod is rescheduled in a different node). Failed containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s …) capped at five minutes, and is reset after ten minutes of successful execution.
https://sysdig.com/blog/debug-kubernetes-crashloopbackoff/
A pod's restartPolicy only refers to restarts of the containers by the kubelet on the same node. If there is no ReplicationController or Deployment, then when a node goes down Kubernetes will not reschedule or restart that node's pods onto any other node. This is the reason bare pods are not recommended for direct use in production.
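For completeness, a minimal sketch of such a bare Pod (name and image are placeholders); if its node dies, nothing recreates it anywhere else:

apiVersion: v1
kind: Pod
metadata:
  name: standalone-pod
spec:
  restartPolicy: Always        # the kubelet restarts the container, but only on this node
  containers:
  - name: app
    image: nginx               # placeholder image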

Deploying container as a CronJob to (Google) Kubernetes Engine - How to stop Pod after completing task

I have a container that runs some data fetching from a MySQL database and simply displays the result in console.log(), and want to run this as a cron job in GKE. So far I have the container working on my local machine, and have successfully deployed this to GKE (in terms of there being no errors thrown so far as I can see).
However, the pods that were created were just left as Running instead of stopping after completion of the task. Are the pods supposed to stop automatically after executing all the code, or do they require explicit instruction to stop and if so what is the command to terminate a pod after creation (by the Cron Job)?
I'm reading that there is supposedly some kind of termination grace period of ~30s by default, but after running a minutely-executed cronjob for ~20 minutes, all the pods were still running. I'm not sure if there's a way to terminate the pods from inside the code; otherwise it would be a little silly to have a cronjob generating lots of pods left running idly. My cronjob.yaml is below:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test
spec:
  schedule: "5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: test
            image: gcr.io/project/test:v1
            # env:
            # - name: "DELAY"
            #   value: 15
          restartPolicy: OnFailure
A CronJob is essentially a cookie cutter for jobs. That is, it knows how to create jobs and execute them at a certain time. Now, that being said, when looking at garbage collection and clean up behaviour of a CronJob, we can simply look at what the Kubernetes docs have to say about this topic in the context of jobs:
When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status. Delete the job with kubectl (e.g. kubectl delete jobs/pi or kubectl delete -f ./job.yaml).
Adding a process.kill(); line in the code to explicitly end the process after the code has finished executing allowed the pod to stop automatically after execution.
A job in Kubernetes is intended to run a single instance of a pod and ensure it runs to completion. As another answer specifies, a CronJob is a factory for Jobs which knows how and when to spawn a job according to the specified schedule.
Accordingly, and unlike a service which is intended to run forever, the container(s) in the pod created by the job must exit upon completion of the job. There is a notable problem with the sidecar pattern, which often requires manual lifecycle handling: if your main container requires additional sidecar containers to provide logging or database access, you must arrange for these to exit when the main container completes, otherwise they will keep running and k8s will not consider the job complete. In such circumstances, the pod associated with the job will never terminate.
The termination grace period is not applicable here: this timer applies after Kubernetes has requested that your pod terminate (e.g. if you delete it). It specifies the maximum time the pod is afforded to shutdown gracefully before the kubelet will summarily terminate it. If Kubernetes never considers your job to be complete, this phase of the pod lifecycle will not be entered.
Furthermore, old pods are kept around after completion for some time to allow perusal of logs and such. You may see pods listed which are not actively running and so not consuming compute resources on your worker nodes.
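If the accumulation of finished Jobs and their pods bothers you, the CronJob spec also lets you cap how many completed and failed Jobs are retained. A sketch based on the manifest in the question (the limit values are illustrative; use batch/v1 on newer clusters):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test
spec:
  schedule: "5 * * * *"
  successfulJobsHistoryLimit: 3    # keep only the last 3 successful Jobs
  failedJobsHistoryLimit: 1        # keep only the last failed Job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: test
            image: gcr.io/project/test:v1
          restartPolicy: OnFailure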
If your pods are not completing, please provide more information regarding the code they are running so we can assist in determining why the process never exits.