I have a 3-node Rancher RKE custom cluster deployed on Rocky Linux VMs on vSphere.
I deployed MetalLB on the cluster and defined an IP pool from my node network.
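For reference, the pool definition I applied looks roughly like this (the pool name and address range below are placeholders for my real node network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: node-network-pool          # placeholder pool name
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250    # placeholder range carved out of the node subnet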
When I create a LoadBalancer service everything looks fine and I get an external IP address from the pool. However, I cannot reach this IP address from the node IP network; I can't even reach it from the nodes themselves. When I curl the external IP address from one of the nodes I get nowhere (no route to host).
Curl to the cluster IP or to the pod itself works fine.
Also, if I create a NodePort service for the pod, I can reach it without issue from outside the cluster.
Any ideas?
Related
I'm managing a small Kubernetes cluster on Azure with Postgres. This cluster is accessible through an Nginx controller with a static IP.
The ingress routes to a ClusterIP Service in front of a pod that uses a Postgres instance. This Postgres instance has all IPs blocked, with a few exceptions for my own IP and the static IP of the ingress.
This worked well until I pushed an update this morning, when to my amazement I saw in the logs an error saying that the pod's IP address differs from the static ingress IP, and that it gets a permission error because of it.
My question: how is it possible that my pod, behind a ClusterIP Service, has a different outbound IP address than the static ingress IP I assigned?
Note that the pod is easily reached through the Ingress.
Ingresses and Services handle only incoming traffic to pods. The IP used for a pod's outgoing traffic depends on the Kubernetes networking implementation you use. By default, all outgoing connections from pods are source-NATed at the node level, which means a pod will have the IP of the node it runs on. So you might want to allow the worker node IP addresses in your Postgres firewall rules.
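If you want to confirm which source IP your pods use for outgoing traffic, one quick check (a sketch; ifconfig.me is just an example of a public IP-echo service) is to run a throwaway pod and ask it:

# Run a temporary pod and print the source IP the outside world sees
kubectl run egress-check --rm -it --restart=Never \
  --image=curlimages/curl --command -- curl -s https://ifconfig.me

The address printed is what Postgres sees: typically the node's public IP, or your cloud's NAT/outbound IP if one is configured.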
I have 5 VPSes, each with a public network interface, across which I have configured a VPN.
3 nodes are Kubernetes masters, where I have set the kubelet --node-ip flag to their private IP addresses.
One of the 3 nodes has an HAProxy load balancer for the Kubernetes masters, listening on the private IP, so that all the nodes use the load balancer's private IP address to join the cluster.
2 nodes are Kubernetes workers, where I didn't set the kubelet --node-ip flag, so their node IP is the public address.
The cluster is healthy and I have deployed my application and its dependencies.
Now I'd like to access the app from the Internet, so I've deployed an edge router and created a Kubernetes Service of type LoadBalancer.
The service is well created but never takes the worker nodes' public IP addresses as EXTERNAL-IP.
Assigning the IP addresses manually works, but obviously I want that to happen automatically.
I have read about the MetalLB project, but it doesn't seem to fit my case, as it expects a range of IP addresses to hand out, while here I have one public IP address per node, and they are not in the same range.
So how can I configure Kubernetes so that my Service of type LoadBalancer automatically gets the public IP addresses as its EXTERNAL-IP?
I can finally answer my own question, in two parts.
Without an external Load Balancer
Firstly, in order to solve the problem from my question, the only way I found that worked quite well was to set the externalIPs field of my LoadBalancer Service to the IP addresses of the Kubernetes worker nodes.
Those nodes were running Traefik and therefore had it listening on ports 80 and 443.
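For reference, the Service looked roughly like this (the name, label, and addresses are placeholders for my real ones):

apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik             # placeholder label of the Traefik pods
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
  externalIPs:
  - 203.0.113.11             # worker node 1 public IP (placeholder)
  - 203.0.113.12             # worker node 2 public IP (placeholder)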
After that, I created as many A DNS records as I have Kubernetes worker nodes, each pointing to the respective worker node's public IP address. With this setup the DNS server returns the list of IP addresses in a random order, and the web browser then takes care of trying the first IP address, then the second one if the first is down, and so on.
The downside of this is that when you want to drain a node for maintenance, or when it crashes, the web browser wastes time trying to reach it before moving on to the next IP address.
So here comes the second option: an external load balancer.
With an external Load Balancer
I took another VPS, installed HAProxy on it, and configured SSL passthrough of the Kubernetes API port so that it load balances the traffic to the master nodes without terminating it.
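The relevant part of the HAProxy configuration is just a TCP (passthrough) frontend and backend, roughly like this sketch (the master IPs are placeholders, not my actual hosts):

# haproxy.cfg (sketch): pass the Kubernetes API port through to the masters
frontend k8s-api
    bind *:6443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check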
With this solution, I removed the externalIPs field from my Service and I've installed MetalLB with a single IP address configured with this manifest:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: staging-public-ips
      protocol: layer2
      addresses:
      - 1.2.3.4/32
When the LoadBalancer Service is created, MetalLB assigns this IP address and calls the Kubernetes APIs accordingly.
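For example, a Service shaped like this (the name, selector, and ports are placeholders) ends up with 1.2.3.4 as its EXTERNAL-IP:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app            # placeholder label
  ports:
  - port: 80
    targetPort: 8080       # placeholder container port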
This has solved my issue to integrate my Kubernetes cluster with Gitlab.
WARNING: MetalLB will assign this IP address only once, so if you create a second LoadBalancer Service it will remain in the Pending state forever, until you give MetalLB an additional IP address.
Everywhere it is mentioned that "a ClusterIP type of Service makes a pod accessible within a Kubernetes cluster."
Does that mean that after adding a ClusterIP Service for a pod, the pod can only be reached via the Service's cluster IP, and we will no longer be able to connect to the pod using the pod IP it had before the Service was added?
Please help me understand; I am still learning Kubernetes.
When a Service is created with type ClusterIP, that Service is accessible only from inside the cluster, because Service IPs are virtual IPs.
If you want to access the pod from outside the cluster, you can use a NodePort or LoadBalancer type Service instead, which lets you reach the pod via a node's IP or the load balancer's IP.
The main reason for using a Service to access pods is that it gives you a fixed location (a cluster IP or a service name) to connect to. Pods can come and go, but the Service IP remains the same. Creating a Service does not take the pod's own IP away: inside the cluster the pod is still reachable directly on its pod IP; the Service is simply a stable entry point in front of it.
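For illustration, a minimal NodePort Service sketch (the name, label, and ports below are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app              # placeholder pod label
  ports:
  - port: 80                 # cluster IP port, reachable from inside the cluster
    targetPort: 8080         # placeholder container port
    nodePort: 30080          # must fall in the default 30000-32767 range

With this, the pod is reachable from outside at <any node IP>:30080, from inside at the cluster IP on port 80, and still directly on its own pod IP.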
I've deployed a hello-world application on my Kubernetes cluster. When I access the app via <cluster ip>:<port> in my browser I get the hello-kuleuven app webpage.
I understand that from outside the cluster you have to access the app via the cluster IP and the port specified in the deployment file (which in my case is 30001). From inside the cluster you have to contact the master node with its local IP and another port number, in my case 10.111.152.164:8080.
My question is about the last line of the webpage:
Kubernetes listening in 443 available at tcp://10.96.0.1:443
Since the service is already accessible from inside and outside the cluster via other ports and IPs, I'm not sure what this address is for.
The IP 10.96.0.1 is the cluster IP of the kubernetes Service in the default namespace, which is the in-cluster endpoint of the Kubernetes API server. You can see it using
kubectl get svc kubernetes -n default
Kubernetes injects this address into every pod (through the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables and the kubernetes.default DNS name) so that workloads can reach the Kubernetes API from inside the cluster.
So the line on the webpage is most likely just echoing that environment information; it has nothing to do with how your app itself is exposed.
Read more about accessing the API from within a pod in the official Kubernetes documentation.
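A quick way to see this from inside the cluster (a hypothetical check against any running pod):

kubectl exec <any-pod> -- env | grep KUBERNETES_SERVICE
# typically prints KUBERNETES_SERVICE_HOST=10.96.0.1 and KUBERNETES_SERVICE_PORT=443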
I have an app deployment called 'backend-app' running in pods on several different nodes. I also have a Service that exposes 'backend-app' so it can be accessed by other cluster-internal pods, such as my 'frontend-app' pods.
If I use DNS to connect to 'backend-app' from my other app deployment, 'frontend-app', will the requests be load balanced across the 'backend-app' pods on each node?
It sounds like a NodePort service will only connect to one node and not load balance my requests to others.
For each Service with type: NodePort a port is opened on all nodes (the same port on each). The port is open whether a pod of that service is running on a node or not. The load balancing is done among all pods of all nodes with no preference to a pod that happens to run on the same node to which you connected on the node port (if there is one there at all).
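For example (the node IPs and the node port are placeholders), hitting the same port on different nodes reaches the same pool of pods:

curl http://<node1-ip>:30080/
curl http://<node2-ip>:30080/
# both requests can be answered by any backend pod, not just pods on the node you hit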
Services automatically load balance to the pods that are assigned to them. See https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service.
The cluster IP address that is created with a Service is a virtual IP that automatically selects an available pod on whichever node is running one. You can find the Service's cluster IP address by doing a DNS lookup of the service name.
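For example, from a 'frontend-app' pod that has nslookup and curl available (assuming both apps live in the default namespace and the Service listens on port 80, which are assumptions on my part):

nslookup backend-app.default.svc.cluster.local      # resolves to the Service's cluster IP
curl http://backend-app.default.svc.cluster.local/  # requests are spread across the backend pods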
My confusion came because I didn't realise the cluster IP address was associated with a service, not with a specific Pod.
I'm currently not sure how NodePorts work with this, though.