Kubernetes Version Upgrades and Downtime

I just tested Rancher RKE, upgrading Kubernetes from 1.13.x to 1.14.x. During the upgrade, an already running nginx Pod got restarted. Is this expected behavior?
Can we have Kubernetes cluster upgrades without user pods restarting?
Which tools support uninterrupted upgrades?
What downtime can we never avoid (apart from the control plane)?

The default way Kubernetes upgrades is by doing a rolling upgrade of the nodes, one at a time.
This works by cordoning (marking the node as unschedulable for new pods) and draining each node being upgraded, so that no pods are left running on that node.
It does that by creating a replacement for each existing pod on another node (if one is available); once the new pod is running (and answering its readiness/health probes), the old pod on the node being upgraded is stopped and removed (SIGTERM is sent to each of its containers).
The amount of time Kubernetes waits for a pod to shut down gracefully is controlled by terminationGracePeriodSeconds in the pod spec; if the pod takes longer than that, its containers are killed with SIGKILL.
The point is, to have a graceful Kubernetes upgrade, you need enough nodes available, and your pods must have correct liveness and readiness probes (https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/).
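As a minimal sketch of what that looks like in practice (the name, image, port, and probe endpoints below are placeholders, not your actual workload), a Deployment with probes and an explicit grace period:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                 # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 30   # time allowed for graceful shutdown before SIGKILL
      containers:
      - name: nginx
        image: nginx:1.17                 # example image/tag
        ports:
        - containerPort: 80
        readinessProbe:                   # pod only receives traffic once this passes
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:                    # container is restarted if this keeps failing
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20
```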
Some interesting material that is worth a read:
https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime (specific to GKE but has some insights)
https://blog.gruntwork.io/zero-downtime-server-updates-for-your-kubernetes-cluster-902009df5b33

Resolved by configuring the container runtime on the hosts not to restart containers when Docker restarts.

Related

Helm upgrade custom microservice causes temporary downtime

During deployment of a new version of the application, the 4 pods are terminated sequentially and replaced by newer ones, but for those ~10 minutes the other microservice calling this app is still hitting the old endpoints, causing 502/404 errors. Does anyone know of a way to deploy 4 new pods first, drain traffic from the old ones to the new ones, and only terminate the old pods once all connections to the previous version have finished?
This probably means you don't have a readiness probe set up. The default rolling update already replaces only 25% of the pods at a time; if you have a readiness probe, the rollout also waits until the new pods are actually available and Ready, otherwise it only waits until they have started.
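For illustration, a hedged sketch of a Deployment that surges new pods before removing old ones and drains in-flight connections before shutdown; the service name, image, health endpoint, and sleep duration are assumptions, not your actual setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice          # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%              # extra pods allowed above the desired count during the rollout
      maxUnavailable: 0          # never drop below the desired replica count
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
      - name: app
        image: registry.example.com/my-microservice:v2   # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:          # new pods only join the Service once this passes
          httpGet:
            path: /healthz       # assumed health endpoint
            port: 8080
        lifecycle:
          preStop:               # give the endpoint time to be removed from the Service
            exec:
              command: ["sleep", "10"]   # assumes the image provides a sleep binary
```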

Kubernetes cluster recovery after linux host reboot

We are still in a design phase, moving away from a monolithic architecture towards microservices with Docker and Kubernetes. We did some basic research on Docker and Kubernetes and got some understanding, but we still have a couple of open questions, considering that we will be creating a K8s cluster with multiple Linux hosts (for various reasons we can't consider the cloud right now).
Consider a scenario where we have a K8s cluster spanning multiple Linux hosts (5+).
1) If one of the Linux worker nodes crashes and we later bring it back, is having the kubelet enabled in systemd in advance sufficient to bring up the required K8s components so that the node is detected by the master again?
2) I believe that once a worker node crashes (running X pods), after the pod eviction timeout the master will reschedule those X pods onto other healthy node(s). Once the node is back up, it won't redeploy those X pods, since the master has already scheduled them onto other nodes, but it will be ready to accept new work from the master.
Is this correct?
1) Yes, that should be the default behavior; check your cluster deployment tool.
2) Yes, Kubernetes handles this automatically for Deployments. For StatefulSets (with local volumes) and DaemonSets, things can be node-specific and Kubernetes will wait for the node to come back.
It is best to create a test environment and test these failure scenarios yourself.
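On the second point, on newer clusters that use taint-based evictions, the effective eviction timeout can also be tuned per pod. A minimal sketch, assuming taint-based evictions are enabled (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.17      # example image
  tolerations:
  # Override the default 300s tolerations added by the DefaultTolerationSeconds
  # admission plugin, so the pod is evicted and rescheduled 60s after its node
  # becomes NotReady or unreachable.
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60
```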

Kubernetes Deployment with Zero Down Time

As a learner of Kubernetes concepts, how they work, and how to deploy with them, I have a couple of cases that I don't know how to achieve. I am looking for advice or guidelines on how to achieve them.
I am using the Google Cloud Platform. The current flow is described below: a push to the Google Cloud Source Repository triggers Cloud Build, which creates a Docker image and pushes the image to the running cluster nodes.
Case 1: I want traffic to be routed to the new pods only once they are up and running, and the old pods to be killed only after each has completed its in-flight requests. Zero downtime is what I'm looking to achieve.
Case 2: What will happen if a running pod's disk usage reaches 100% or, in the Debian case, the inode count reaches full capacity? Will Kubernetes create new pods to cope?
Case 3: How do I manage pod-to-database connection limits?
Like the other answer says, use liveness and readiness probes. Basically, a new pod is added to the Service pool and only starts serving traffic after its readiness probe has passed; the old pod is then removed from the Service pool, drained, and terminated. This happens in a rolling fashion, one pod at a time.
This really depends on the capacity of your cluster and its ability to schedule pods given the limits set on their containers (for more about setting up limits for containers, refer to here). In terms of the inode limit, if you reach it on a node, the kubelet won't be able to run any more pods on that node. The kubelet eviction manager also has a mechanism whereby it evicts the pods using the most inodes, and you can configure the eviction thresholds on the kubelet.
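As a sketch of that last point, inode pressure can be expressed as kubelet eviction thresholds; the exact values below are illustrative only:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Illustrative thresholds -- tune them for your nodes.
evictionHard:
  memory.available: "200Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"     # start evicting pods when free inodes on the node filesystem drop below 5%
  imagefs.inodesFree: "5%"
```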
This would be more a limitation at the OS level combined with your stateful application's configuration. You can keep this configuration in a ConfigMap; for example, for something like MySQL the relevant option would be max_connections.
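For example, a ConfigMap carrying a MySQL option file could look like the following sketch (the name and limit are hypothetical); it would then be mounted into the database pod as its configuration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config          # hypothetical name
data:
  my.cnf: |
    [mysqld]
    # cap concurrent client connections (value is illustrative)
    max_connections=150
```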
I can answer case 1 since I've done it myself.
Use Deployments with readinessProbes and livenessProbes.

Configure Kubernetes StatefulSet to start pods first, restart failed containers after start?

Basic info
Hi, I'm encountering a problem with Kubernetes StatefulSets. I'm trying to spin up a set with 3 replicas.
These replicas/pods each have a container which pings a container in the other pods based on their network-id.
The container requires a response from all the pods. If it does not get a response the container will fail. In my situation I need 3 pods/replicas for my setup to work.
Problem description
What happens is the following: Kubernetes starts 2 pods rather fast. However, since I need 3 pods for a fully functional cluster, the first 2 pods keep crashing because the 3rd is not up yet.
For some reason Kubernetes opts to keep restarting both pods instead of adding the 3rd pod so my cluster will function.
I've seen my setup run properly after about 15 minutes because Kubernetes added the 3rd pod by then.
Question
So, my question.
Does anyone know a way to delay restarting failed containers until the desired amount of pods/replicas have been booted?
I've since found out the cause of this.
StatefulSets launch pods in a specific order; if one of the pods fails to launch, the next one is not launched.
You can add podManagementPolicy: "Parallel" to the StatefulSet spec to launch the pods without waiting for previous pods to be Running.
See this documentation
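A minimal sketch of where that field sits in the spec (the name, image, and headless Service below are placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-cluster              # hypothetical name
spec:
  serviceName: my-cluster       # assumed headless Service
  replicas: 3
  podManagementPolicy: Parallel # start all pods at once instead of one by one
  selector:
    matchLabels:
      app: my-cluster
  template:
    metadata:
      labels:
        app: my-cluster
    spec:
      containers:
      - name: node
        image: registry.example.com/my-cluster:latest   # hypothetical image
```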
I think a better way to deal with your problem is to leverage a liveness probe, as described in the documentation, rather than delaying the restart time (which is not configurable in the YAML).
Have your pods respond to the liveness probe right after they start, to let Kubernetes know they are alive and prevent them from being restarted. Meanwhile, your pods keep pinging the others until they are all up; only when all of them have started do they serve external requests. This is similar to creating a ZooKeeper ensemble.

GKE, automatic restart of stuck node

Sometimes a node backing a GKE cluster goes down with NotReady status:
$ kubectl get nodes
NAME                        STATUS     AGE   VERSION
gke-my-pool-f8045547-60gw   Ready      10d   v1.6.2
gke-my-pool-f8045547-7c7e   NotReady   10d   v1.6.2
A node can be stuck in NotReady for days, until I manually restart it.
I have a health check for my pods, so all of them move to other nodes, but the problem is that this stale node still has GCE disks attached. So some of the pods are unable to start on any other node until I manually detach the disks (or restart the stale node).
This basically kills the whole idea of Kubernetes, because it happens a few times a day and I have to babysit it all day. Is there any way to configure Kubernetes or GCE to automate this? The simplest way would be an automatic restart of NotReady nodes, but there seems to be no way to configure a health check for the nodes themselves. Another option would be automatically detaching the disks when they are requested from another machine, but I don't see any way to configure that either.
GKE has a node auto-repair functionality that will monitor the node's health status and trigger an automatic repair event (currently a node recreation for NotReady nodes). It's currently in Beta, but you can try it: https://cloud.google.com/container-engine/docs/node-auto-repair