I've been trying to set up a LoadBalancer Service that preserves the client source IP, according to this document.
I have tried 2 cases:
The first case: I simply create a Service (call it svcA) of type LoadBalancer with externalTrafficPolicy: Local and give it an externalIP equal to the master node's IP. The backing pod of the Service is on another worker node.
As the document describes, the controller health-checks all nodes in the cluster to determine which ones host my pods, and traffic should be forwarded only to those nodes.
So I expect that when my client sends traffic destined for the externalIP of svcA, the traffic will be forwarded to the worker node. But it actually isn't.
However, I can reach the pod through svcA when I change its externalIP to the worker node's IP and send my traffic to that IP instead (this is expected, as "Local" means routing only to pods on that node).
The second case: I change svcA's type to NodePort, and the same thing as in the previous case happens, both with and without externalIPs.
Is there perhaps a misconfiguration, or am I misunderstanding what the document describes here?
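Here is roughly what svcA looks like in the first case (a minimal sketch; the selector, ports, and IP are placeholders, not my real values):
apiVersion: v1
kind: Service
metadata:
  name: svcA
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  externalIPs:
    - 10.0.0.1          # master node IP in the first case (placeholder)
  selector:
    app: my-app         # placeholder label for the backing pod on the worker node
  ports:
    - port: 80
      targetPort: 8080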
Cluster Info:
Kube version 1.19.7
I'm not using any cloud provider, just a fresh install on a cluster of VM nodes.
Thanks for helping :D
Related
The external IP is perfectly reachable from outside the cluster. It's perfectly reachable from all nodes within the cluster. However, when I try to telnet to the URL from a pod within the cluster that is not on the same node as a pod that is part of the service backend, the connection always times out.
The external IP is reachable by pods that run on the same node as a pod that is part of the service backend.
All pods can perfectly reach the cluster IP of the service.
When I set externalTrafficPolicy to Cluster, the pods are able to reach the external URL regardless of what node they're on.
I am using iptables proxying and Kubernetes 1.16.
I'm completely at a loss here as to why this is happening. Is someone able to shed some light on this?
From the official doc here,
service.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.
A Service routes to either node-local or cluster-wide endpoints. When you set externalTrafficPolicy to Local, only node-local endpoints are used, so nodes that don't host a backend pod will not forward the traffic.
So you will need to set externalTrafficPolicy to Cluster instead.
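For example, switching the policy is a one-line change in the Service spec (a minimal sketch; the name, selector, and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  # Cluster (the default) lets any node forward traffic, at the cost of the client source IP.
  # Local preserves the client source IP, but only nodes running a backend pod accept traffic.
  externalTrafficPolicy: Cluster
  selector:
    app: my-app         # placeholder label
  ports:
    - port: 80
      targetPort: 8080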
I have a Kubernetes installation on-premise and it seems to be working fine.
I am now trying to install MetalLB in order to use LoadBalancer Services.
Our network guy gave me the IP range 11.240.15.192/27, which can be used for the Kubernetes cluster's load-balancing services.
My cluster runs on 11.211.220.X and I have one master and three worker nodes.
My question is, what do I need to provide as the IP range in the MetalLB ConfigMap?
Do I need to physically attach those IPs to any of the nodes before MetalLB can use them to hand out IP addresses for my services?
These questions never seem to get answered. All the setups I find either use Minikube or an installation on a local network where the 192.168.X.X range is fully available.
When I assigned 11.240.15.192-11.240.15.223 in the ConfigMap and created a Service of type LoadBalancer, its external IP was still in the Pending state for a while.
I then applied changes manually to the service as follows:
...
spec:
  type: LoadBalancer
  externalIPs:
    - 11.240.15.192
It still couldn't connect to my sample nginx deployment on port 80.
Then, to experiment, I changed externalIPs to one of the Kubernetes node IP addresses, and now I can access the nginx index page. This raises a big concern, since I only have three worker nodes and can probably run only three services on port 80, using up one node IP each.
Can someone please guide me on where exactly I need to make changes so that I can use the whole range of IP addresses?
I guess you don't have to assign the external IP; it will be assigned automatically, and in sequence, from the pool you configured.
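For illustration, a layer 2 pool covering the range from your question might look like this (a sketch assuming the ConfigMap-based configuration used by older MetalLB releases; newer releases configure pools through CRDs instead, and the pool name here is arbitrary):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-pool              # arbitrary pool name
      protocol: layer2
      addresses:
      - 11.240.15.192/27         # range provided by the network team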
I have a bare-metal Kubernetes cluster with an haproxy ingress controller (DaemonSet) on an external IP. Is it possible to restrict kube-proxy to route only to the local haproxy ingress pod?
To be more specific, I have 2 pods of the haproxy ingress controller and use one external IP for them. As I understand it, kube-proxy will route to the pods in round-robin fashion. I didn't find any way to restrict this particular behaviour.
Set externalTrafficPolicy: Local in the NodePort Service.
This will make it so that traffic going to node X will only go to the pod on node X. If there is no pod on node X, the traffic will be dropped (but this should not be an issue since you're using a DaemonSet).
Another benefit is that this preserves the true source IP that haproxy sees. Without externalTrafficPolicy, it is possible that haproxy sees the source IP of another node instead of the original one, since nodes can proxy traffic.
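A minimal sketch of such a Service (the name, label, and ports are assumptions; adjust them to your ingress DaemonSet):
apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only deliver to ingress pods on the node that received the traffic
  selector:
    app: haproxy-ingress         # assumed label on the DaemonSet pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443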
More info here
I have 5 VPSes, each with a public network interface, for which I have configured a VPN.
3 nodes are Kubernetes masters where I have set the kubelet --node-ip flag to their private IP address.
One of the 3 nodes has an HAProxy load balancer for the Kubernetes masters, listening on the private IP, so that all the nodes use the private IP address of the load balancer to join the cluster.
2 nodes are Kubernetes workers where I didn't set the kubelet --node-ip flag, so their node IP is the public address.
The cluster is healthy and I have deployed my application and its dependencies.
Now I'd like to access the app from the Internet, so I've deployed an edge router and created a Kubernetes Service of type LoadBalancer.
The service is well created but never takes the worker nodes' public IP addresses as EXTERNAL-IP.
Assigning the IP addresses manually works, but obviously I want that to happen automatically.
I have read about the MetalLB project, but it doesn't seem to fit my case, as it is supposed to have a range of IP addresses to distribute, while here I have one public IP address per node, and they're not in the same range.
So how can I configure Kubernetes so that my Service of type LoadBalancer automatically gets the public IP addresses as EXTERNAL-IP?
I can finally answer my own question, in two parts.
Without an external Load Balancer
Firstly, in order to solve the problem from my question, the only way I found that worked quite well was to set the externalIPs of my LoadBalancer Service to the IP addresses of the Kubernetes worker nodes.
Those nodes were running Traefik and therefore had it listening on ports 80 and 443.
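For illustration, the Service looked roughly like this (a sketch; the IPs, label, and ports here are placeholders, not my real values):
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  externalIPs:
    - 203.0.113.10      # worker node 1 public IP (placeholder)
    - 203.0.113.11      # worker node 2 public IP (placeholder)
  selector:
    app: traefik        # assumed label on the Traefik pods
  ports:
    - name: web
      port: 80
      targetPort: 80
    - name: websecure
      port: 443
      targetPort: 443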
After that, I created as many DNS A records as I have Kubernetes worker nodes, each pointing to the respective worker node's public IP address. With this setup the DNS server returns the list of IP addresses in a random order, and the web browser takes care of trying the first IP address, then the second one if the first is down, and so on.
The downside is that when you drain a node for maintenance, or when it crashes, the web browser wastes time trying to reach it before moving on to the next IP address.
So here comes the second option: an external load balancer.
With an external Load Balancer
I took another VPS, installed HAProxy on it, and configured SSL passthrough of the Kubernetes API port so that it load-balances the traffic to the master nodes without terminating it.
With this solution, I removed the externalIPs field from my Service and installed MetalLB with a single IP address, configured with this manifest:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: staging-public-ips
      protocol: layer2
      addresses:
      - 1.2.3.4/32
When the LoadBalancer Service is created, MetalLB assigns this IP address and calls the Kubernetes APIs accordingly.
This has solved my issue to integrate my Kubernetes cluster with Gitlab.
WARNING: MetalLB will assign the IP address only once, so if you create a second LoadBalancer Service it will remain in the Pending state forever, until you give MetalLB an additional IP address.
What I have: a Kubernetes cluster on GCE with three nodes. Let's suppose the master has the IP <MasterIP>. Additionally, I have a Service within the cluster of type NodePort which listens on the port <PORT>. I can access the service using <NodeIP>:<PORT>.
What I would like to do: access the service at <MasterIP>:<PORT>. How can I forward the port from <MasterIP> into the cluster? In other words: <MasterIP>:<PORT> --> <NodeIP>:<PORT>.
The reason I would like to do this is simply that I don't want to rely on a specific NodeIP, since the Pod can be rescheduled to another Node.
Thank you
A Kubernetes Service with type: NodePort opens the same port on every Node and sends the traffic to wherever the pod is currently scheduled using internal IP routing. You should be able to access the service at <AnyNodeIP>:<PORT>.
In Google's Kubernetes cluster, the master machines are not available for routing traffic. The way to reliably expose your services is to use a Service with type: LoadBalancer which will provide a single IP that resolves to your service regardless of which Nodes are up or what their IPs are.
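A minimal sketch of such a Service (the name, selector, and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer    # on GCE the cloud controller provisions the external IP automatically
  selector:
    app: my-app         # assumed label on the backing pods
  ports:
    - port: 80          # port exposed on the load balancer IP
      targetPort: 8080  # port the pods listen on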