I have a question about Kubernetes when deploying a new version.
My Kubernetes YAML configuration uses the RollingUpdate strategy. My concern is what happens during a version change: if a php-fpm pod is in the middle of handling a request, is that request lost when the pod is swapped for the new version?
My main question is whether Kubernetes, with this strategy, takes into account that a pod is currently in use and, if so, waits for it to finish its in-flight work before replacing it.
Thanks!
If something is dropping your sessions it would be a bug. Generally speaking, if you have a 'Service' that forwards to multiple backend replicas, an update happens one replica at a time. Something like this:
New pod created.
Wait for the new pod to be ready and serviceable.
Put the new pod in the Service pool.
Remove the old pod from the Service pool.
Drain old pod. Don't take any more incoming connections and wait for connections to close.
Take down the old pod.
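As a rough sketch of how that can look in a Deployment manifest (the name, image, port and probe are placeholders, not taken from your setup): a readinessProbe gates when the new pod joins the Service pool, and a preStop hook plus terminationGracePeriodSeconds give the old pod time to finish in-flight requests before it is taken down.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: php-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # start one new pod before removing an old one
      maxUnavailable: 0    # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: php-app
    spec:
      terminationGracePeriodSeconds: 60   # time allowed for in-flight requests to finish
      containers:
      - name: php-fpm
        image: example/php-fpm:2.0        # placeholder image
        ports:
        - containerPort: 9000
        readinessProbe:                   # new pod only joins the Service pool once this passes
          tcpSocket:
            port: 9000
          periodSeconds: 5
        lifecycle:
          preStop:                        # small delay so the endpoint is removed before shutdown
            exec:
              command: ["sh", "-c", "sleep 10"]
```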
Kubernetes tends to assume apps are small/lightweight/stateless microservices which can be stopped on one node and restarted on another node with no downtime.
We have a slow-starting (20 min) legacy (stateful) application which, once running as a set of pods, should not be rescheduled without due cause. The reason is that all user sessions would be killed and the users would have to log in again. There is NO way to serialize the sessions and externalize them. We want 3 instances of the pod.
Can we tell k8s not to move a pod unless absolutely necessary (i.e. it dies)?
Additional information:
The app is a Tomcat/Java monolith
Assume for the sake of argument we would like to run it in Kubernetes
We do have a liveness test endpoint available
There is no benefit in telling k8s to use only one pod; that is not the "spirit" of k8s. In that case, it might be better to use a dedicated machine for your app.
But you can assign a pod to a particular node - Assigning Pods to Nodes. This should only be necessary when there are special hardware requirements (e.g. an AI microservice needs a GPU, which is only available on node xy).
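For illustration, a minimal sketch of pinning a pod to a labeled node (the label, node name and image are hypothetical, not from your cluster):

```yaml
# Hypothetical example: label the node first, e.g.
#   kubectl label nodes node-xy gpu=true
apiVersion: v1
kind: Pod
metadata:
  name: ai-microservice
spec:
  nodeSelector:
    gpu: "true"      # only schedule onto nodes carrying this label
  containers:
  - name: app
    image: example/ai-microservice:1.0   # placeholder image
```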
k8s doesn't restart your pods for fun. It restarts them when there is a reason (node died, app died, ...), and I have never noticed a "random reschedule" in a cluster. Without further information (like the deployment, logs, or cluster details) it is hard to say what exactly happened to you.
And regarding your comment: there are different types of recreation; one of them starts a fresh instance and only kills the old one once the new one has started successfully. Look here: Kubernetes deployment strategies
All points together:
Don't force your app onto a specific node - k8s will select a suitable node itself.
There are normally no planned reschedules in k8s.
k8s recreates pods only if there is a reason. Maybe your app didn't answer on the liveness endpoint? Or did someone/something delete your pod?
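If the concern is planned, voluntary disruptions (node drains, cluster resizing), one sketch under those assumptions is to combine a generous liveness probe with a PodDisruptionBudget. The names, image, probe path and delay below are placeholders, and older clusters expose the PDB under policy/v1beta1 instead of policy/v1.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-monolith
spec:
  replicas: 3
  selector:
    matchLabels:
      app: legacy-monolith
  template:
    metadata:
      labels:
        app: legacy-monolith
    spec:
      containers:
      - name: tomcat
        image: example/legacy-app:1.0      # placeholder image
        livenessProbe:
          httpGet:
            path: /health                  # your existing liveness test endpoint (path assumed)
            port: 8080
          initialDelaySeconds: 1500        # don't probe during the ~20 min startup
          periodSeconds: 30
          failureThreshold: 5              # several misses before k8s restarts the pod
---
# Limit voluntary disruptions (node drains, resizing) to one pod at a time
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: legacy-monolith-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: legacy-monolith
```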
Is it possible to plug new storage into an active pod without restarting the pod? I want to bind new storage to a running pod without a restart. Does Kubernetes support this?
Most things in a Pod are immutable. In particular, if you look at the API definition of a PodSpec it says in part (emphasis mine):
containers: List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated.
Typically you don't directly work with Pods; you work with a higher-level controller like a Deployment. There you can edit these things, and it reacts by creating new Pods with the new pod spec and then deleting the old Pods.
Also remember that sometimes the cluster itself will delete or restart a Pod (if its Node is over capacity or fails, for example) and you don't have any control over this. It's better to plan for your Pods to be periodically restarted than to try to prevent it.
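For example, attaching new storage by editing the Deployment might look like the sketch below (the PVC name, mount path and image are placeholders); applying this change rolls out replacement Pods rather than hot-plugging the volume into the running one.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        volumeMounts:
        - name: extra-data
          mountPath: /data            # adding this mount creates new Pods, it is not hot-plugged
      volumes:
      - name: extra-data
        persistentVolumeClaim:
          claimName: extra-data-pvc   # assumed to exist already
```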
I created a Job in my Kubernetes cluster. The job takes a long time to finish, and I decided to cancel it, so I deleted the Job, but I noticed the associated pod is NOT automatically deleted. Is this the expected behavior? Why is it not consistent with Deployment deletion? Is there a way to have the pod deleted automatically?
If you're deleting a deployment, chances are you don't want any of the underlying pods, so it most likely forcefully deletes the pods by default. Also, the desired state of pods would be unknown.
On the other hand, if you're deleting a pod, Kubernetes doesn't know what kind of replication controller may be attached to it or what it will do next. So it signals a shutdown to the container so that it can clean up gracefully. There may be work still in flight in the pod, like a web request, and it would not be good to kill that request if it only takes a second to complete. This is what happens when you scale up your pods or roll out a new deployment and don't want users to experience any downtime. It is in fact one of the benefits of Kubernetes, as opposed to a traditional application server which requires you to shut down the system to upgrade (or to play with load balancers to redirect traffic), which may negatively affect users.
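As an illustration of that graceful-shutdown behaviour, a pod spec can declare how long Kubernetes waits after SIGTERM and run a preStop hook first; the values and image below are only examples.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  terminationGracePeriodSeconds: 30   # how long k8s waits after SIGTERM before sending SIGKILL
  containers:
  - name: app
    image: example/worker:1.0         # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # give in-flight requests a moment to drain
```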
As a learner of Kubernetes concepts, how they work, and how to deploy with them, I have a couple of cases I don't know how to handle. I am looking for advice or some guidelines on how to achieve them.
I am using the Google Cloud Platform. The current flow is described below. A push to the Google Cloud Source Repository triggers Cloud Build, which creates a Docker image and pushes the image to the running cluster nodes.
Case 1: I want traffic to be routed to new pods once they are up and running, and the old pods to be killed only after each of them has completed its in-flight requests. Zero downtime is what I'm looking to achieve.
Case 2: What happens if a running pod's disk usage reaches 100%, or (in the Debian case) the inode count reaches full capacity? Will Kubernetes create new pods to cope?
Case 3: How do I manage pod-to-database connection limits?
Like the other answer says, use liveness and readiness probes. Basically, a new pod is added to the Service pool but only serves traffic after its readiness probe has passed. The old pod is then removed from the Service pool, drained, and terminated. This happens in a rolling fashion, one pod at a time.
This really depends on the capacity of your cluster and its ability to schedule pods given the limits set on their containers (for more about setting up limits for containers, refer to the documentation on managing container resources). As for the inode limit, if you reach it on a node, the kubelet won't be able to run any more pods on that node. The kubelet eviction manager also has a mechanism whereby it evicts the pods using the most inodes. You can also configure the eviction thresholds on the kubelet.
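A sketch of both knobs follows; all values are arbitrary examples, and the KubeletConfiguration fragment assumes you manage the kubelet configuration file yourself rather than applying it with kubectl.

```yaml
# Per-container requests/limits (example values)
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app:1.0            # placeholder image
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
        ephemeral-storage: 1Gi
      limits:
        cpu: 500m
        memory: 512Mi
        ephemeral-storage: 2Gi
---
# KubeletConfiguration fragment with inode-based eviction thresholds
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"             # evict pods when free inodes on the node fs drop below 5%
```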
This would be more of a limitation at the OS level combined with your stateful application's configuration. You can keep this configuration in a ConfigMap. For example, in MySQL the relevant option would be max_connections.
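For example, a minimal sketch of such a ConfigMap (the file name and value are only illustrations):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  my.cnf: |
    [mysqld]
    max_connections = 250   # example limit; mount this file into the MySQL pod's config directory
```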
I can answer case 1 since I've done it myself.
Use Deployments with readinessProbes & livenessProbes
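A container fragment showing both probes might look like this sketch (the paths and port are assumptions):

```yaml
containers:
- name: app
  image: example/app:1.0      # placeholder image
  readinessProbe:             # gates when the pod joins the Service pool
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5
  livenessProbe:              # restarts the container if it stops answering
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 10
```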
I'm trying to set up a stateful Apache Flink application in Kubernetes and I need to save the current state in case of a disruption, such as someone deleting the pod or it being rescheduled due to cluster resizing.
I added a preStop hook to the container that accomplishes this behaviour, but when I delete a pod using kubectl delete pod it spins up a new Pod before the old one terminates.
Guides such as this one use the Recreate update strategy to make sure only one pod runs at a time. This works fine in the case of updating a deployment, but it does not cover disruptions like the ones I described above. I also tried setting spec.strategy.rollingUpdate.maxSurge to 0, but that made no difference.
Is it possible to configure my Deployment in such a way that no pod ever starts before another one is terminated, or do I need to switch to StatefulSets?
I agree with #Cosmic Ossifrage, as StatefulSets make it easy to achieve your goal. Each Pod in a StatefulSet has a unique, persistent identity and a stable hostname that Kubernetes Engine maintains regardless of where it is scheduled.
Therefore, StatefulSet Pods are deployed in sequential order and terminated in reverse ordinal order, and the Kubernetes StatefulSet controller handles only one Pod at a time, waiting for the previous Pod to be completely deleted before creating the next one.
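A rough sketch of what that StatefulSet could look like (the names, image, storage size and the preStop script are placeholders, not your actual setup); the default OrderedReady policy means a replacement Pod is not created until the old one has fully terminated.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: flink-app
spec:
  serviceName: flink-app              # headless Service assumed to exist with this name
  replicas: 1
  podManagementPolicy: OrderedReady   # default: pods are created/removed one at a time, in order
  selector:
    matchLabels:
      app: flink-app
  template:
    metadata:
      labels:
        app: flink-app
    spec:
      terminationGracePeriodSeconds: 120
      containers:
      - name: flink
        image: example/flink-app:1.0    # placeholder image
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "/opt/save-state.sh"]   # hypothetical state-saving hook
        volumeMounts:
        - name: state
          mountPath: /state
  volumeClaimTemplates:
  - metadata:
      name: state
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```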