I have a deployment with one pod running my custom image. After executing kubectl create -f deployment.yaml, the pod starts and kubectl reports it as "Running". However, I have an initialization script that starts Apache Tomcat, and it takes around 40-45 seconds to run and bring the server up.
I also have a load balancer deployment with nginx. Nginx forwards incoming requests to Apache Tomcat via proxy_pass. When I scale my deployment to 2 replicas and shut one of them down, the application sometimes gets stuck and freezes.
It seems that Kubernetes load balancing is not working correctly: it tries to use a pod that is still being initialized by the script.
How can I tell Kubernetes that a pod in the deployment hasn't finished initializing, so that it doesn't receive traffic until it is fully up?
If I understand correctly, your problem is that the application is not ready to accept requests because your initialization script hasn't finished.
For that situation you can easily set up different types of probes, such as liveness and readiness probes. With a readiness probe, your application won't be considered ready to accept requests until the whole pod has started up and signals that it is alive.
Here you can read more about it: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
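A readiness probe along these lines would keep the pod out of the Service's endpoints until Tomcat actually answers. The image name, port, path, and timings below are assumptions you would adapt to your own setup, not values from your deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: my-custom-image:latest   # placeholder for your image
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /          # any endpoint that only answers once Tomcat is up
            port: 8080
          initialDelaySeconds: 30   # startup script takes ~40-45 s
          periodSeconds: 5
          failureThreshold: 6
```

Until the probe succeeds, kube-proxy will not route Service traffic to the pod, so a freshly scaled-up replica that is still running its init script receives no requests.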
Related
So my setup on kubernetes is basically an external nginx load balancer that sends traffic for virtual hosts across the nodes.
The cluster runs everything in Docker containers:
10 instances of a front-end pod, which is a compiled Angular app
10 instances of a pod with two containers: a "built" image of a Symfony app plus a dedicated PHP-FPM container for each pod
an external MySQL server on the local network running a basic Docker container
10 CDN pods that simply run an nginx server to pick up static content requests
10 pods running a socket chat application via nginx
a dedicated network of OpenVidu servers
plus PHP-FPM pods for multiple cron jobs
All works fine and dandy until I, say, update the front-end image and roll out an update to the cluster. The pods all update without a problem, but I end up with a strange issue: pages not loading, or partially loading, or requests for the backend pods not loading at all, or somehow being served by the front-end pods. It's really quite random.
The only way to get it back up again is to destroy every deployment and fire them up again, and I've no idea what is causing it. Once it's all restarted, it all works again.
Just looking for ideas on what it could be. Has anyone experienced this behaviour before?
I have configured my ingress controller with nginx-ingress hashing, and I define an HPA for my deployments. When we do load testing we hit a problem: newly created pods aren't warmed up enough, yet the load balancer immediately shifts a portion of the traffic to them, so latency spikes and the service chokes. Is there a way to define some smooth load rebalancing that moves traffic over gradually, warming up the service in a more natural way?
At a glance, I see 2 possible reasons for that behaviour:
I think there is a chance that you are facing the same problem as in this question: Some requests fail during autoscaling in kubernetes. In that case, Nginx was sending requests to Pods that were not completely ready. To solve this you can configure a readiness probe. Personally, I configure my readiness probes to send an HTTP request to a /health endpoint of my services.
There is a chance, however, that your application naturally performs slowly during its first requests, usually because of caching or some other operation that needs to happen at the beginning of its life. I encountered this problem in a Django + Gunicorn app, where Gunicorn only loaded my app on the first request. To solve this I used a postStart container hook which sends a request to my app right after the container is created. Here is an example of its use. You may also have a look at this question: Kubernetes Pod warm-up for load balancing.
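The postStart hook approach can be sketched like this; the port, path, and image name are assumptions for illustration, not values from the original setup:

```yaml
containers:
- name: app
  image: my-django-app:latest   # placeholder image
  lifecycle:
    postStart:
      exec:
        # Fire one request at the app right after the container starts,
        # so the first "cold" request comes from us rather than a user.
        # The sleep gives the server a moment to bind its port; "|| true"
        # keeps a failed warm-up from killing the container.
        command: ["sh", "-c", "sleep 2 && curl -s http://localhost:8000/ > /dev/null || true"]
```

Combined with a readiness probe, this means the expensive first request has already happened by the time the pod starts receiving real traffic.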
I have Consul running in my cluster and each node runs a consul-agent as a DaemonSet. I also have other DaemonSets that interact with Consul and therefore require a consul-agent to be running in order to communicate with the Consul servers.
My problem is, if my DaemonSet is started before the consul-agent, the application will error as it cannot connect to Consul and subsequently get restarted.
I also notice the same problem with other DaemonSets, e.g. Weave, which requires kube-proxy and kube-dns. If Weave starts first, it will constantly restart until the kube services are ready.
I know I could add retry logic to my application, but I was wondering if it was possible to specify the order in which DaemonSets are scheduled?
Kubernetes itself does not provide a way to specify dependencies between pods / deployments / services (e.g. "start pod A only if service B is available" or "start pod A after pod B").
The current approach (based on what I found while researching this) seems to be retry logic or an init container. To quote the docs:
They run to completion before any app Containers start, whereas app Containers run in parallel, so Init Containers provide an easy way to block or delay the startup of app Containers until some set of preconditions are met.
This means you can either add retry logic to your application (which I would recommend, as it might also help you in other situations such as a short service outage), or you can use an init container that polls a health endpoint via the Kubernetes service name until it gets a satisfying response.
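An init container for your Consul case could look roughly like this. It is a sketch under assumptions: the consul-agent is assumed to expose its HTTP API on the node at port 8500, and the image and names are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: wait-for-consul
        image: busybox:1.36
        # Block the app container until the local consul-agent answers.
        command: ['sh', '-c',
          'until wget -qO- "http://${HOST_IP}:8500/v1/status/leader"; do echo waiting for consul; sleep 2; done']
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      containers:
      - name: my-app
        image: my-app:latest   # placeholder for your application image
```

The init container runs to completion before the app container starts, so the app never sees a missing consul-agent at startup; retry logic in the app is still worth having for agent restarts later on.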
Retry logic is preferred over startup dependency ordering, since it handles both the initial bring-up case and recovery from post-start outages.
I have a Kubernetes cluster on AWS, set up with kops.
I set up a Deployment that runs an Apache container and a Service for the Deployment (type: LoadBalancer).
When I update the deployment by running kubectl set image ..., as soon as the first pod of the new ReplicaSet becomes ready, the first couple of requests to the service time out.
Things I have tried:
I set up a readinessProbe on the pod, works.
I ran curl localhost on a pod, works.
I performed a DNS lookup for the service, works.
If I curl the IP returned by that DNS lookup inside a pod, the first request will timeout. This tells me it's not an ELB issue.
It's really frustrating since otherwise our Kubernetes stack is working great, but every time we deploy our application we run the risk of a user timing out on a request.
After a lot of debugging, I think I've solved this issue.
TL;DR: Apache has to exit gracefully.
I found a couple of related issues:
https://github.com/kubernetes/kubernetes/issues/47725
https://github.com/kubernetes/ingress-nginx/issues/69
504 Gateway Timeout - Two EC2 instances with load balancer
Some more things I tried:
Increase the KeepAliveTimeout on Apache, didn't help.
Ran curl on the pod IP and node IPs, worked normally.
Set up an externalName selector-less service for a couple of external dependencies, thinking it might have something to do with DNS lookups, didn't help.
The solution:
I set up a preStop lifecycle hook on the pod that runs apachectl -k graceful-stop, so Apache terminates gracefully.
The issue (at least from what I can tell) is that when pods are taken down during a deployment, they receive a TERM signal, which causes Apache to immediately kill all of its children. This can cause a race condition where kube-proxy still sends some traffic to pods that have received the TERM signal but have not terminated completely.
Also got some help from this blog post on how to set up the hook.
I also recommend increasing the terminationGracePeriodSeconds in the PodSpec so apache has enough time to exit gracefully.
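Putting the hook and the grace period together, the relevant part of the pod spec looks roughly like this (the container name, image, and 60-second value are illustrative assumptions):

```yaml
spec:
  terminationGracePeriodSeconds: 60   # give Apache time to drain in-flight requests
  containers:
  - name: apache
    image: httpd:2.4   # placeholder; use your Apache image
    lifecycle:
      preStop:
        exec:
          # graceful-stop lets child processes finish their current requests
          # instead of being killed immediately by the TERM signal
          command: ["apachectl", "-k", "graceful-stop"]
```

The preStop hook runs before the TERM signal is delivered, so by the time Kubernetes signals the container, Apache is already draining connections rather than dropping them.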
Basic info
Hi, I'm encountering a problem with Kubernetes StatefulSets. I'm trying to spin up a set with 3 replicas.
These replicas/pods each have a container which pings a container in the other pods based on their network ID.
The container requires a response from all the pods. If it does not get a response the container will fail. In my situation I need 3 pods/replicas for my setup to work.
Problem description
What happens is the following: Kubernetes starts 2 pods rather fast. However, since I need 3 pods for a fully functional cluster, the first 2 pods keep crashing because the 3rd is not up yet.
For some reason Kubernetes opts to keep restarting both pods instead of adding the 3rd pod so my cluster can function.
I've seen my setup run properly after about 15 minutes, because Kubernetes had added the 3rd pod by then.
Question
So, my question: does anyone know a way to delay restarting failed containers until the desired number of pods/replicas has been booted?
I've since found out the cause of this.
StatefulSets launch pods in a specific order. If one of the pods fails to launch, the next one is not launched.
You can set podManagementPolicy: "Parallel" to launch the pods without waiting for the previous pods to be Running.
See this documentation
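In the StatefulSet spec, that is a single field; everything else below (names, image, replica count) is a placeholder sketch:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-cluster
spec:
  serviceName: my-cluster
  replicas: 3
  # Launch all pods at once instead of one at a time in ordinal order,
  # so pod-0 and pod-1 are not stuck waiting for pod-2 to exist.
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: my-cluster
  template:
    metadata:
      labels:
        app: my-cluster
    spec:
      containers:
      - name: node
        image: my-cluster-image:latest   # placeholder for your image
```

Note that Parallel only affects launch and termination ordering; it does not change the stable network identities the pods get from the headless Service.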
I think a better way to deal with your problem is to leverage a liveness probe, as described in the documentation, rather than delay the restart time (which is not configurable in the YAML).
Your pods respond to the liveness probe right after they start, letting Kubernetes know they are alive, which prevents them from being restarted. Meanwhile, the pods keep pinging the others until they are all up. Only when all your pods have started will they serve external requests. This is similar to creating a ZooKeeper ensemble.
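That split between "alive" and "ready to serve" could be sketched like this; the port number and the peer-check script are hypothetical stand-ins for whatever your container actually exposes:

```yaml
containers:
- name: node
  image: my-cluster-image:latest   # placeholder
  livenessProbe:
    # Succeeds as soon as the process is listening, so the pod is not
    # restarted while it waits for its peers to come up.
    tcpSocket:
      port: 7000                   # assumed peer-communication port
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    # Only passes once all peers respond, so external traffic is held
    # back until the whole ensemble is formed.
    exec:
      command: ["sh", "-c", "/opt/check-peers.sh"]   # hypothetical script
    periodSeconds: 10
```

With this arrangement the restart loop disappears (liveness passes early) while clients still see the set only once all three replicas can reach each other.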