I'm trying to set up a Helm chart with some dependencies like MySQL, RabbitMQ and so on. When my actual microservice starts, the moment the first connection is established from the microservice to MySQL, both instantly crash.
It works with Docker for Desktop, but it doesn't work with minikube.
I tried manually getting inside the pod (the microservice's and others too) and logging in to the MySQL server (the MySQL pod), and it still crashes without any exception
BUT
the strange thing is that if I try to log in with wrong credentials the first time, it doesn't crash; it shows me an error saying the credentials are wrong, and if I then try with the correct ones, it succeeds!
If I log in from the MySQL pod into the MySQL server itself, it logs in correctly.
A curl to the MySQL port returns the version, so the port is reachable as it should be; only logging in to MySQL from an external pod is the problem.
Does anyone have an idea what's going on here?
Maybe you can wait for your application's dependencies to be ready before requesting them.
You could use readiness probes and init containers to delay your application from starting until its dependencies are up.
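For example, here is a minimal sketch of an init container that blocks the microservice until MySQL answers on its port, plus a basic readiness probe. The service name mysql, the image names and port 8080 are assumptions to adapt to your chart (the official busybox image ships an nc that supports -z):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      # The init container must finish before the app container is started.
      initContainers:
      - name: wait-for-mysql
        image: busybox:1.36
        # Loop until the mysql Service accepts TCP connections on 3306.
        command: ['sh', '-c', 'until nc -z mysql 3306; do echo waiting for mysql; sleep 2; done']
      containers:
      - name: my-microservice
        image: my-microservice:latest        # placeholder image
        ports:
        - containerPort: 8080
        # Keep the pod out of Service endpoints until it actually answers.
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10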
Related
I discovered a strange behavior with K8s networking that can break some applications designs completely.
I have two pods and one Service
Pod 1 is a stupid Reverse Proxy (I don't know the implementation)
Pod 2 is a Webserver
The mentioned Service belongs to pod 2, the webserver
After the initial start of my stack I discovered that Pod 1, the Reverse Proxy, is not able to reach the webserver on the first attempt for some reason; ping works fine, and so does curl.
Now I tried wget mywebserver inside of Pod 1 - Reverse Proxy and got back the following:
wget mywebserver
--2020-11-16 20:07:37-- http://mywebserver/
Resolving mywebserver (mywebserver)... 10.244.0.34, 10.244.0.152, 10.244.1.125, ...
Connecting to mywebserver (mywebserver)|10.244.0.34|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.0.152|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.1.125|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.2.177|:80... connected.
Where 10.244.2.177 is the Pod IP of the Webserver.
The problem, it seems to me, is that the Reverse Proxy does not retry forwarding the packet; it only tries once, fails like in the wget output above, and the request gets dropped because the backend is not reachable, apparently due to the fancy K8s iptables handling.
If I configure the reverse proxy not to use the Service DNS name for load balancing and instead use the Pod IP (10.244.2.177) directly, everything works fine and as expected.
I already tried this with a variety of CNI providers such as Flannel, Calico, Canal, Weave and also Cilium (kube-proxy is not used with Cilium), but all of them failed, and all of them do fancy routing that nobody clearly understands out of the box. So my question is: how can I make K8s routing work immediately here? I have already reimplemented my whole stack on Docker Swarm just to see if it works, and it does, flawlessly! So this issue seems to have something to do with the K8s routing scheme.
Just to exclude misconfiguration on my side, I also tried this with different ready-to-use K8s solutions like managed K8s from DigitalOcean and self-hosted RKE. All show the same behavior.
Does somebody have an idea what the problem might be and how to fix this behavior of K8s?
It might also be very useful to know what actually happens during the wget request, as this remains a mystery to me.
Many thanks in advance!
It turned out that I had several misconfigurations in my K8s Deployment.
I first removed clusterIP: None, as this leads to the behavior wget shows in my question above. Besides that, I had set the app: and tier: labels wrong in my deployment. Anyway, now everything is working fine and wget gets a proper connection.
Thanks again
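For reference, a rough sketch of what the fixed Service looks like, assuming the webserver Deployment labels its pods with app: mywebserver and tier: web (both names are placeholders): the Service keeps a normal cluster IP (no clusterIP: None) and its selector matches the pod labels exactly.

apiVersion: v1
kind: Service
metadata:
  name: mywebserver
spec:
  # No "clusterIP: None" here, so the Service gets a virtual cluster IP
  # and kube-proxy load-balances across ready endpoints.
  selector:
    app: mywebserver   # must match the labels in the Deployment's pod template
    tier: web
  ports:
  - port: 80
    targetPort: 80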
I have a web deployment and a MongoDB StatefulSet. The web deployment connects to the MongoDB, but once in a while an error may occur in the MongoDB and it reboots and starts up again. The connection from the web deployment to the MongoDB never gets re-established. Is there a way, if the MongoDB pod restarts, to restart the web pod as well?
Yes, you can use a liveness probe on your application container that probes your Mongo Pod/StatefulSet. You can configure it in such a way that it fails if it cannot open a TCP connection to your Mongo Pod/StatefulSet when Mongo crashes (maybe check every second).
Keep in mind that with this approach you will have to always start your Mongo Pod/StatefulSet first.
The sidecar approach described in the other answer should work too, only it would take a bit more configuration.
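A minimal sketch of such a probe on the web container, assuming the Mongo StatefulSet is reachable through a Service called mongodb on port 27017 and that nc is available in the web image (both are assumptions). An exec probe is used because a tcpSocket probe can only check the container's own ports:

containers:
- name: web
  image: my-web:latest              # placeholder image
  livenessProbe:
    # Restart the web container when Mongo stops accepting TCP connections.
    exec:
      command: ['sh', '-c', 'nc -z mongodb 27017']
    initialDelaySeconds: 10         # give Mongo time to come up first
    periodSeconds: 1                # check roughly every second
    failureThreshold: 3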
Unfortunately, there's no easy way to do this within Kubernetes directly, as Kubernetes has no concept of dependencies between resources.
The best place to handle this is within the web server pod itself.
The ideal solution is to update the application to retry the connection on a failure.
A less ideal solution would be to have a sidecar container that just polls the database and causes a failure if the database goes down, which should cause Kubernetes to restart the pod.
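A rough sketch of that sidecar idea, assuming the database is reachable as mongodb:27017 (placeholder names). Note that when the watchdog exits, Kubernetes restarts that container; depending on your setup you may still want a probe on the web container itself so the connection actually gets re-established:

containers:
- name: web
  image: my-web:latest              # placeholder image
- name: db-watchdog                 # polling sidecar
  image: busybox:1.36
  # Poll the database every 5 seconds; exit non-zero as soon as it stops answering.
  command: ['sh', '-c', 'while nc -z mongodb 27017; do sleep 5; done; exit 1']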
I am new to Kubernetes and trying to throw together a quick learning project; however, I am confused about how to connect a pod to a local service.
I am storing some config in a ZooKeeper instance that I am running on my host machine, and am trying to connect a pod to it to grab config.
However, I cannot get it to work. I've tried the magic "10.0.2.2" that I've read about, but that did not work. I also tried creating a Service and Endpoints object, but again to no avail. Any help would be appreciated, thanks!
Also, for background I'm using minikube on macOS with the hyperkit vm-driver.
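For what it's worth, the selector-less Service plus Endpoints approach mentioned in the question usually looks roughly like the sketch below. The name zookeeper-local is a placeholder, and 192.168.64.1 is the address at which the host is typically reachable from inside a hyperkit minikube VM, which you would need to verify on your machine:

apiVersion: v1
kind: Service
metadata:
  name: zookeeper-local
spec:
  # No selector, so Kubernetes leaves the endpoints for us to manage.
  ports:
  - port: 2181
    targetPort: 2181
---
apiVersion: v1
kind: Endpoints
metadata:
  name: zookeeper-local    # must match the Service name
subsets:
- addresses:
  - ip: 192.168.64.1       # host IP as seen from inside the minikube VM (verify locally)
  ports:
  - port: 2181

Pods would then reach ZooKeeper at zookeeper-local:2181.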
I have a Kubernetes cluster on AWS, set up with kops.
I set up a Deployment that runs an Apache container and a Service for the Deployment (type: LoadBalancer).
When I update the deployment by running kubectl set image ..., as soon as the first pod of the new ReplicaSet becomes ready, the first couple of requests to the service time out.
Things I have tried:
I set up a readinessProbe on the pod, works.
I ran curl localhost on a pod, works.
I performed a DNS lookup for the service, works.
If I curl the IP returned by that DNS lookup from inside a pod, the first request will time out. This tells me it's not an ELB issue.
It's really frustrating since otherwise our Kubernetes stack is working great, but every time we deploy our application we run the risk of a user timing out on a request.
After a lot of debugging, I think I've solved this issue.
TL;DR: Apache has to exit gracefully.
I found a couple of related issues:
https://github.com/kubernetes/kubernetes/issues/47725
https://github.com/kubernetes/ingress-nginx/issues/69
504 Gateway Timeout - Two EC2 instances with load balancer
Some more things I tried:
Increase the KeepAliveTimeout on Apache, didn't help.
Ran curl on the pod IP and node IPs, worked normally.
Set up an externalName selector-less service for a couple of external dependencies, thinking it might have something to do with DNS lookups, didn't help.
The solution:
I set up a preStop lifecycle hook on the pod that runs apachectl -k graceful-stop, so Apache terminates gracefully.
The issue (at least from what I can tell) is that when pods are taken down during a deployment, they receive a TERM signal, which causes Apache to immediately kill all of its children. This can cause a race condition where kube-proxy still sends some traffic to pods that have received the TERM signal but have not terminated completely.
Also got some help from this blog post on how to set up the hook.
I also recommend increasing the terminationGracePeriodSeconds in the PodSpec so apache has enough time to exit gracefully.
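For completeness, a minimal sketch of the relevant part of the pod spec, assuming apachectl is on the PATH as it is in the official httpd image; the 60-second grace period is just an example value:

spec:
  terminationGracePeriodSeconds: 60    # give Apache time to drain in-flight requests
  containers:
  - name: apache
    image: httpd:2.4                   # placeholder image
    lifecycle:
      preStop:
        exec:
          # Runs before the TERM signal reaches the main process, so Apache
          # can finish open connections instead of killing its children.
          command: ['apachectl', '-k', 'graceful-stop']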
I am following the instructions on how to set up Vitess in Kubernetes. I am using minikube 0.15 on my local machine (Windows 10) running on VirtualBox 5.1.12.
I have managed to get all the way to step 12 before I start seeing strange things happening.
When I run ./vtgate-up.sh everything starts fine, but the service stays in a pending state.
At first I didn't think anything of it until I went on to the next step of trying to install the guestbook client app.
After running ./guestbook-up.sh again everything went fine, no errors, but the service is again in a pending state, and I don't get an external endpoint.
I tried going on to the next step, but when I run kubectl get service guestbook I am supposed to get an external IP, but I don't. The instructions say to wait a few minutes, but I have let this run for an hour and still nothing.
So here is where I am stuck. What do I do next?
It's normal that you can't get an external IP in this scenario since that gets created in response to the LoadBalancer service type, which does not work in Minikube.
For the vtgate service, it actually shouldn't matter since the client (the guestbook app) is inside Kubernetes and can use the cluster IP. For the guestbook, you could try to work around the lack of LoadBalancer support in Minikube to access the frontend from outside the cluster in a couple different ways:
Use kubectl port-forward to map a local port to a particular guestbook pod.
Or, change the guestbook service type to NodePort and access that port on your VM's IP address (see the sketch below).
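For the second option, the change is roughly the sketch below. The nodePort value 30080 is an example (Kubernetes picks one automatically if you omit it), and the app: guestbook selector is an assumption that has to match whatever labels the guestbook pods actually carry; you would then browse to the minikube VM's IP on that port.

apiVersion: v1
kind: Service
metadata:
  name: guestbook
spec:
  type: NodePort           # instead of LoadBalancer, which stays pending on Minikube
  selector:
    app: guestbook         # assumed label; match the guestbook pods
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080        # example port in the default 30000-32767 range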