I have a Kubernetes cluster running. Inside it I have a Python application running as a pod.
This app talks to RabbitMQ, which is also a pod in the same cluster, so the whole connection uses internal IPs.
The thing is, I have a Service for RabbitMQ, so I am providing the Service IP for the connection. It looks something like this (with pika):
import pika

credentials = pika.PlainCredentials(rabbituser, rabbitpass)
# QHost holds the RabbitMQ Service's cluster IP
connection = pika.BlockingConnection(pika.ConnectionParameters(host=QHost, port=80, credentials=credentials))
channel = connection.channel()
As you can see, the connection I am trying to make is to port 80, which is open on the k8s Service.
The Service should then forward it to the respective pod, which is RabbitMQ, but the client is trying to connect to port 5672.
That means the error occurs because it is trying to connect to service-ip:5672, which is not listening.
I want it to forward the request to the RabbitMQ pod the Service points to.
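To narrow down which side is misconfigured, a minimal TCP probe from inside the application pod can show which ports the Service actually accepts (a sketch; the Service IP value below is a placeholder for QHost from the snippet above):

import socket

QHost = "10.96.0.10"  # placeholder: the RabbitMQ Service's cluster IP

# Probe which ports accept TCP connections on the Service IP.
for port in (80, 5672):
    try:
        with socket.create_connection((QHost, port), timeout=3):
            print(f"port {port}: open")
    except OSError as exc:
        print(f"port {port}: failed ({exc})")

If port 80 connects but 5672 does not, the Service mapping is working and it is the client configuration that insists on 5672 (and vice versa).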
I hope that I have explained the main things.
Please ask if more details are required.
Thanks
I know the information below is not enough to trace the issue, but I still want some pointers toward a solution.
We have an Amazon EKS cluster.
Currently, we are facing a reachability issue with the Kafka pods.
Environment:
Total 10 nodes, across Availability Zones ap-south-1a and ap-south-1b
I have three replicas of the Kafka cluster (Helm chart installation)
I have three replicas of ZooKeeper (Helm chart installation)
Kafka uses an external advertised listener on port 19092
Kafka has a Service with an internal Network Load Balancer
I have deployed a test pod to check reachability of the Kafka pods
We are using Cloud Map based DNS for the advertised listener
Working:
When I run a telnet command from EC2, like telnet 10.0.1.45 19092, it works as expected. IP 10.0.1.45 is a load balancer IP.
When I run a telnet command from EC2, like telnet 10.0.1.69 31899, it works as expected. IP 10.0.1.69 is an actual node's IP and 31899 is the NodePort.
Problem:
When I run the same command from the test pod, like telnet 10.0.1.45 19092, it sometimes works and sometimes gives an error like telnet: Unable to connect to remote host: Connection timed out.
The issue seems to be related to kube-proxy. We need help to resolve it.
Can anyone help guide me?
Can I restart kube-proxy? Does it affect other pods/deployments?
I believe this problem is caused by the TCP-only nature of AWS's NLB (as mentioned in the comments).
In a nutshell, your pod-to-pod communication fails when hairpinning is needed.
To confirm this is the root cause, you can verify that when the telnet works, the Kafka pod and the client pod are not on the same EC2 node, and that when they are on the same node, the telnet fails.
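For example, a quick way to see which node each pod landed on, sketched with the official kubernetes Python client (this assumes the check runs in a pod whose service account is allowed to list pods, and the namespace is a placeholder):

from kubernetes import client, config

config.load_incluster_config()  # use the pod's service account credentials
v1 = client.CoreV1Api()

# Print each pod with the node it was scheduled on, so you can check
# whether the Kafka broker and the test pod share an EC2 node.
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(f"{pod.metadata.name} -> {pod.spec.node_name}")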
There are (at least) two approaches to tackle this issue:
Use K8s internal networking - refer to the k8s Service's DNS name (see the sketch after this list)
Every K8s Service has its own DNS FQDN for internal use (meaning the traffic stays on the k8s network, without going out to the LoadBalancer and coming back into k8s again). You can just telnet this instead of reaching the NodePort via the LB.
I.e., let's assume your Kafka service is named kafka. Then you can just telnet kafka.<namespace>.svc.cluster.local (on the port exposed by the kafka Service), or simply kafka from a pod in the same namespace.
Use K8s anti-affinity to make sure the client and Kafka are never scheduled on the same node.
Oh, and as indicated in this answer, you might need to make that service headless.
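To illustrate the first approach, here is a sketch comparing the in-cluster path against the NLB path from inside the test pod (the service FQDN, namespace, and in-cluster port 9092 are assumptions; 10.0.1.45:19092 is the load balancer endpoint from the question):

import socket

def tcp_ok(host, port, timeout=3):
    # Return True if a TCP connection to (host, port) succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If the in-cluster path is reliable while the NLB path flakes,
# that is consistent with the hairpin explanation.
print("service:", tcp_ok("kafka.default.svc.cluster.local", 9092))
print("nlb:    ", tcp_ok("10.0.1.45", 19092))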
I just set up a new EKS cluster (latest version available, using three nodes with the default AMI).
I deployed a Redis instance in it as a Kubernetes service and exposed it. I can access the Redis database through an internal DNS name like mydatabase.redis (it's deployed in the redis namespace). From another pod I can connect to my Redis database; however, sometimes the connection takes more than 10 seconds.
It doesn't seem to be a DNS resolution issue, as host mydatabase.redis responds immediately with the service IP address. However, when I try to connect to it, for example with nc mydatabase.redis 6379 -v, it sometimes connects instantly and sometimes takes more than 10 seconds.
All my services are impacted, and I don't know why. I didn't change any settings in my cluster; this is a basic EKS cluster.
How can I debug this?
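One way to characterize the problem before digging into kube-proxy or CNI logs is to time the raw TCP handshake repeatedly and see whether the slow attempts cluster around a fixed delay (a sketch; host and port come from the question):

import socket
import time

host, port = "mydatabase.redis", 6379

# Time the TCP connect repeatedly; a bimodal pattern (instant vs. ~N seconds)
# usually points at retries or timeouts on the path, not at Redis itself.
for attempt in range(20):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=15):
            print(f"attempt {attempt}: connected in {time.monotonic() - start:.2f}s")
    except OSError as exc:
        print(f"attempt {attempt}: failed after {time.monotonic() - start:.2f}s ({exc})")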
I have a cluster with 3 nodes. On each node I have a frontend application running in a Pod and a backend application running in a separate Pod.
I send data from the frontend application to the backend application; to do this I use a ClusterIP Service and k8s DNS.
I also have a function in my frontend that sends data to a separate service unrelated to my k8s cluster. I send this data using a standard AJAX request to a URL with a payload, i.e. http://my-seperate-service-unrelated-tok8.com.
All of this works correctly and the cluster operates as I want. I have this cluster deployed to GKE.
I now want to run this cluster locally using minikube, which I have been able to do. However, when I am running locally I do not want to send data to my external service; instead I want to forward it to either a new Pod I will create, or just not send it at all.
The problem here is that I need a proxy to intercept outgoing network traffic, check whether the outgoing request is the one I am looking for, and if it is, redirect it.
I understand each node in a cluster runs a kube-proxy service, which is used to forward traffic to the relevant Services in the cluster.
I would like to either extend this service or create a new proxy service where I can listen for outgoing traffic to a specific URL and redirect it.
Is this possible in a k8s cluster? I assume there is a Service I can create that listens for all outgoing requests and redirects specific requests based on rules I set.
I wasn't sure if k8s clusters have a Service already configured that I can simply add to; that's why I thought of kube-proxy. Would anyone be able to advise on this?
I wanted to add this proxy so I don't have to change my code when it's run locally in minikube or deployed to GKE.
Any help is greatly appreciated. Thanks!
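For what it's worth, if the frontend's external URL can be made configurable per environment, a tiny stand-in service inside minikube is often simpler than intercepting traffic at the kube-proxy level. A minimal sketch of such a stub (the port and the acknowledge-and-drop behaviour are assumptions):

from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    # Accepts the requests that would have gone to the external service.

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Instead of dropping the payload, it could be forwarded to
        # another in-cluster Pod here.
        print(f"intercepted {len(body)} bytes for {self.path}")
        self.send_response(204)  # acknowledge without a response body
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StubHandler).serve_forever()

Run it as a Deployment behind a Service in minikube and point the frontend's AJAX URL at that Service there.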
I made a tool that helps you forward a service to another service, a local port, a service from another cluster, etc.
This way you can keep exactly the same URLs, ports, and code, but the underlying service gets "replaced"; if I understand correctly, this is what you are looking for.
The repository, which includes a quick example of a stage service being replaced with my local port 3000, plus more info and examples, is here: linker-tool
If you are interested, let me know if you need help or have any questions.
Is it possible to run PostgreSQL as a service inside an OpenShift cluster and get external traffic to it via an exposed route (the recommended default way of communicating from the outside)?
The OpenShift 3.9 documentation states this:
A router is configured to accept external requests and proxy them
based on the configured routes. This is limited to
HTTP/HTTPS(SNI)/TLS(SNI), which covers web applications.
PostgreSQL can do SSL, and it can be configured to listen on port 443 (HTTPS), but I think it cannot do SNI yet. I would only run a single pod behind the service, so load balancing should not cause issues. If feasible, I would expect to connect into the cluster and to the service with psql -h ....
You can create a LoadBalancer Service, which will create a load balancer dedicated to just your pods, and this load balancer accepts TCP-based traffic: https://docs.openshift.com/container-platform/3.9/admin_guide/tcp_ingress_external_ports.html#unique-external-ips-ingress-traffic-configure-service
So something like
oc expose dc postgres --type=LoadBalancer --name=postgres-lb
would get you a publicly accessible PostgreSQL DB.
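Once the load balancer has an external address, connecting from outside the cluster would look something like the sketch below (hostname, credentials, and database name are placeholders, and psycopg2 merely stands in for whichever client you actually use):

import psycopg2  # pip install psycopg2-binary

# Placeholders: substitute the external hostname assigned to postgres-lb
# and your real credentials.
conn = psycopg2.connect(
    host="postgres-lb.example.com",
    port=5432,
    user="postgres",
    password="secret",
    dbname="postgres",
    sslmode="require",  # TLS at the PostgreSQL protocol level; no SNI needed
)
print(conn.server_version)
conn.close()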
I have come to the (preliminary) conclusion that OpenShift port forwarding (oc port-forward) offers the best way (well ... :) forward in this situation.
I'm trying to use Istio in a K8s 1.6 cluster on AWS.
I have a Kafka pod/service running the old-fashioned way, with a "kafka-zk-broker-kafka.dev" service without a cluster IP (headless), so the kafka-zk-broker-kafka.dev service (I'm in the dev namespace) resolves to the internal names of my 3 Kafka pods. This works great.
~ # nslookup kafka-zk-broker-kafka.dev
Name: kafka-zk-broker-kafka.dev
Address 1: 10.33.0.11 kafka-zk-kafka-0.kafka-zk-broker-kafka.dev.svc.cluster.local
Address 2: 10.38.96.16 kafka-zk-kafka-2.kafka-zk-broker-kafka.dev.svc.cluster.local
Address 3: 10.40.128.13 kafka-zk-kafka-1.kafka-zk-broker-kafka.dev.svc.cluster.local
I deployed a Kafka producer application using the Istio sidecar, as the application also exposes a gRPC port for internal use.
Deployment went fine, but my application can't connect to the "kafka-broker" service. DNS resolution is OK, but I can't reach the service port (TCP 9092) using either a Kafka client or telnet.
What I understand is that, when the Istio (Envoy) sidecar is deployed, everything leaving the pod goes through the Envoy proxy...
So does the Envoy proxy not know how to reach regular services?
Am I missing something? Is there a way to mix Istio/Envoy with regular k8s services?
What you are doing should work, but I think you're running into this known bug: https://github.com/istio/issues/issues/37