Would Kubernetes bring up a downed Pod if only a Pod definition file exists? - kubernetes

I have only a Pod definition file. Kubernetes will bring up the pod. What happens if it goes down? Would Kubernetes bring it up automatically? Or, if we want a certain number of pods up at all times, MUST we take the help of a ReplicationController (or ReplicaSet in newer versions)?

Although your question is not entirely clear, yes: if you have deployed the pod through a Deployment or ReplicaSet, then Kubernetes will create another one if you or someone else deletes that pod.
If you have just the pod without any controller such as a ReplicaSet, then it is gone forever, as there is nothing to take care of it.
If the app crashes inside the pod, then:
A CrashLoopBackOff means that you have a pod starting, crashing, starting again, and then crashing again.
A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never, which applies to all containers in a pod. The default value is Always, and the restartPolicy only refers to restarts of the containers by the kubelet on the same node (so the restart count will reset if the pod is rescheduled to a different node). Failed containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s ...) capped at five minutes, and the delay is reset after ten minutes of successful execution.
https://sysdig.com/blog/debug-kubernetes-crashloopbackoff/

The pod's restartPolicy only refers to restarts of the containers by the kubelet on the same node. If there is no ReplicationController or Deployment, then when a node goes down Kubernetes will not reschedule or restart that node's pods on any other node. This is the reason bare pods are not recommended for direct use in production.
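For illustration, a minimal sketch of such a standalone Pod manifest with an explicit restartPolicy (the name and image are placeholders, not taken from the question):

# pod-definition.yaml - a bare Pod, not managed by any controller
apiVersion: v1
kind: Pod
metadata:
  name: my-app              # hypothetical name
spec:
  restartPolicy: Always     # kubelet restarts crashed containers, but only on this node
  containers:
  - name: my-app
    image: nginx:1.25       # placeholder image

If the node running this pod is lost, nothing recreates the pod elsewhere; that is exactly what a ReplicaSet or Deployment adds.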

Related

What is the default behavior of Kubernetes when a pod crashes?

In a Kubernetes deployment with 4 static pods and no autoscaling, what happens by default if one pod crashes? Will it be re-created automatically with the same ID or a different ID, or will the application continue running on 3 pods?
When a pod crashes, it will automatically be restarted. You will see this in the incrementing value of the pod's "Restarts" column when you run kubectl get pods.
From the documentation: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template
Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.
In other words, a deployment will ALWAYS restart your pod, regardless, and you cannot change that behaviour.
A restart will not change the name of the pod (or ID, as you have called it).
The only time the pod name will change is if the pod gets deleted. This can happen during autoscaling processes or if the pod gets evicted from a node.
You've specified no autoscaling in your deployment, but if you have specified a value of 4 replicas, as I suspect you have, then the eviction will cause that one pod to change names as it gets recreated on another node, in order to meet your request for 4 replicas.
By "changing names" I just mean the hash at the end of the pod name will change. So your pod named my-test-g4gsv may be renamed to my-test-4dsv4 after it goes to a new node.
There is a back-off policy for restarts. So if Kubernetes detects a pod has been restarted repeatedly, it will start delaying its restart attempts. You will notice this as a CrashLoopBackOff value under the pod status (instead of Running). While in this state the pod is not started, so during this time your deployment is essentially running with reduced replicas until Kubernetes restarts the pod.
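A minimal sketch of the kind of Deployment described above (the name, labels, and image are placeholders); the restartPolicy in the pod template is written out even though Always is both the default and the only value a Deployment accepts:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-test               # hypothetical name; pods get a hash suffix, e.g. my-test-g4gsv
spec:
  replicas: 4                 # the Deployment keeps 4 pods running at all times
  selector:
    matchLabels:
      app: my-test
  template:
    metadata:
      labels:
        app: my-test
    spec:
      restartPolicy: Always   # default and only allowed value in a Deployment's pod template
      containers:
      - name: my-test
        image: nginx:1.25     # placeholder image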

How to automatically force delete pods stuck in 'Terminating' after node failure?

I have a deployment that deploys a single pod with a persistent volume claim. If I switch off the node it is running on, after a while k8s terminates the pod and tries to spin it up elsewhere. However the new pod cannot attach the volume (Multi-Attach error for volume "pvc-...").
I can manually delete the old 'Terminating' pod with kubectl delete pod <PODNAME> --grace-period=0 --force and then things recover.
Is there a way to get Kubernetes to force delete the 'Terminating' pods after a timeout or something? Tx.
According to the docs:
A Pod is not deleted automatically when a node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a timeout. Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node. The only ways in which a Pod in such a state can be removed from the apiserver are as follows:
The Node object is deleted (either by you, or by the Node Controller).
The kubelet on the unresponsive Node starts responding, kills the Pod and removes the entry from the apiserver.
Force deletion of the Pod by the user.
So I assume you are not deleting or draining the node that is being shut down.
In general I'd advise ensuring that any broken nodes are deleted from the node list; that should cause the Terminating pods to be deleted by the controller manager.
Node deletion normally happens automatically, at least on Kubernetes clusters running on the major cloud providers, but if that's not happening for you then you need a way to remove nodes that are not healthy.
Use Recreate in .spec.strategy.type of your Deployment. This tells Kubernetes to delete the old pods before creating new ones.
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
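A minimal sketch of a single-replica Deployment using that strategy with a PVC attached (the names, image, mount path, and claim name are all placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 1
  strategy:
    type: Recreate              # delete the old pod before creating the new one (default is RollingUpdate)
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25       # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data      # hypothetical mount path
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-claim   # hypothetical PVC name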

k8s - Keep pod up even if sidecar crashed

I have a pod with a sidecar. The sidecar does file synchronisation and is optional. However it seems that if the sidecar crashes, the whole pod becomes unavailable. I want the pod to continue serving requests even if its sidecar crashed. Is this doable?
Set the pod's restartPolicy to Never. It will prevent the kubelet from restarting your pod even if one of your containers fails.
If a Pod is running and has two containers, and container 1 exits with a failure: with restartPolicy set to Never, the kubelet will not restart that container and the Pod's phase stays Running.
Reference
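A minimal sketch of that setup (container names and images are placeholders):

# A Pod whose containers the kubelet will not restart if they exit
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar      # hypothetical name
spec:
  restartPolicy: Never        # applies to every container in the pod
  containers:
  - name: app                 # main container keeps serving requests
    image: my-app:1.0         # placeholder image
  - name: file-sync           # optional sidecar; if it crashes it simply stays down
    image: my-sync:1.0        # placeholder image

Note that restartPolicy is pod-wide, so with Never the main container will not be restarted either if it crashes.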

Kubernetes StatefulSet restart waits if one or more pods are not ready

I have a StatefulSet consisting of multiple pods. I have a use case where I need to restart the STS, so I run: kubectl rollout restart statefulset mysts
If I restart the StatefulSet at a time when one or more pods are in a not-ready state, the restart action gets queued up. The restart takes effect only after all the pods become ready. This could take long, depending on the readiness threshold and the kind of issue the pod is facing.
Is there a way to force a restart of the StatefulSet without waiting for the pods to become ready? I don't want to terminate/delete the pods instead of restarting the StatefulSet. A rolling restart works well for me as it helps avoid an outage of the application.
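For reference, the commands involved in triggering and observing such a rollout (the StatefulSet name mysts comes from the question; the label selector is a hypothetical example):

kubectl rollout restart statefulset mysts    # trigger the rolling restart
kubectl rollout status statefulset mysts     # watch the rollout proceed; blocks until it completes
kubectl get pods -l app=mysts                # inspect pod readiness (label is hypothetical)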

Does a Kubernetes POD with restart policy always have to be under the auspice of a controller to work?

If I create a Pod manifest (pod-definition.yaml) and set restartPolicy: Always, does that Pod also need to be associated with a controller (i.e., a ReplicaSet or Deployment)? The end goal here is to auto-start the container in the Pod should it die. Without the Pod being associated with a controller, will that container automatically restart? What happens if the Pod has only one container?
The documentation is not clear here, but it led me to believe that the Pod must be under a controller for this to work, i.e., if you implicitly create a K8s object and specify a restart policy of Never you'll get a pod, and if you specify Always (the default) you'll get a deployment.
A Pod without a controller (Deployment, ReplicationController, etc.) and only with a restartPolicy will not be restarted or rescheduled if the node (to be exact, the kubelet on that node) where it is running dies, is drained, is rebooted, or the pod is evicted from the node for some other reason. If the node is in a good state and the pod crashes for some reason, it will be restarted on the same node without the need for a controller.
The reason is that the pod's restartPolicy is handled by the kubelet, i.e. the pod is restarted by the kubelet of its node. Now if the node dies, the kubelet is dead too and cannot restart the pod. Hence you need a controller which will restart it on another node.
From the docs
restartPolicy only refers to restarts of the Containers by the kubelet on the same node
In short, if you want pods to survive a node failure or the failure of a node's kubelet, you should have a higher-level controller.
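A minimal sketch of such a higher-level controller, here a ReplicaSet wrapping the same kind of pod template (names and image are placeholders; in practice a Deployment, which manages ReplicaSets for you, is more common):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app                # hypothetical name
spec:
  replicas: 1                 # the controller recreates the pod elsewhere if its node is lost
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      restartPolicy: Always   # in-place container restarts are still done by the kubelet
      containers:
      - name: my-app
        image: nginx:1.25     # placeholder image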