Forwarding traffic from a DigitalOcean Load Balancer to a Kubernetes Service not working

I created a Kubernetes service that is exposed via type: NodePort. I can access the service in my browser if I enter http://PublicDropletIp:31433.
Now I want to use a DigitalOcean Load Balancer to forward traffic from port 80 to the service. So I set a rule for the Load Balancer to forward http/80 to Droplet http/31433.
Unfortunately this doesn't work. If I enter the load balancer IP in the browser I get: 503 Service Unavailable.
Does anyone know how I can expose the service so that the Load Balancer can forward traffic to it?

I had this same issue and ended up on this thread. If anyone else is looking, I resolved it by configuring the firewall on my server.
To answer the question above, the firewall should be configured to accept TCP connections from the load balancer's IP on port 31433.
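For reference, here is a minimal sketch of the kind of NodePort Service described in the question; the nodePort value 31433 comes from the original post, everything else (names, container port) is illustrative:
apiVersion: v1
kind: Service
metadata:
  name: my-app               # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app              # hypothetical label
  ports:
    - port: 80               # port inside the cluster
      targetPort: 8080       # container port, an assumption
      nodePort: 31433        # the Droplet firewall must allow TCP on this port from the load balancer's IP
Once the firewall accepts TCP from the load balancer's IP on 31433, the http/80 to http/31433 forwarding rule should start passing health checks and serving traffic.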

Related

Kubernetes with Metallb and Traefik behind OpenVPN

I have a Kubernetes cluster installed on a bare-metal server. I have installed MetalLB as the external load balancer and Traefik as the reverse proxy. The cluster is behind an OpenVPN with subnet 10.1.0.0/24, and the IP of the server is 10.1.0.1.
For MetalLB I assigned 10.1.1.0/24 as the address pool, so the Traefik LoadBalancer IP is 10.1.1.1.
I also have my own domain name server that is pushed when connecting to the VPN.
If I create a domain for one of my apps inside the Kubernetes cluster, to what IP should I point the domain so that I can access the app through it from other servers that are also connected to the VPN?
I think I misconfigured something, but I got stuck.
You need to point the domain to the Traefik LoadBalancer IP, which is 10.1.1.1 in your case. The IP address you have shared is the external IP provided by MetalLB.
When a client connects to the application using the domain, DNS resolution will resolve the domain to the Traefik LB IP. The traffic is forwarded to the Traefik LB, which routes it to the appropriate service and pod in your Kubernetes cluster based on the rules defined in your Traefik configuration. For reference, see the blog post by Peter Gillich on this topic.
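Here is a rough sketch of the wiring, assuming a hypothetical domain app.example.vpn: create an A record in your internal DNS that points app.example.vpn to 10.1.1.1, and route the host inside the cluster with an Ingress along these lines (all names are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                  # hypothetical
spec:
  ingressClassName: traefik     # assumes Traefik registered this IngressClass
  rules:
    - host: app.example.vpn     # hypothetical domain; the DNS A record points it to 10.1.1.1
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app    # hypothetical Service in front of your app
                port:
                  number: 80
Any other server on the VPN that uses the pushed DNS will then resolve the domain to the Traefik LB IP and reach the app through Traefik.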

SSL termination for a Kubernetes NGINX Ingress load balancer exposing an EMQX broker in GCP

I am currently trying to do SSL termination for an EMQX broker deployed in GKE.
Exposing the EMQX broker behind an NGINX Ingress L4 load balancer was successful: I am able to display the dashboard and connect to the broker via the LB IP.
I've tried creating an NGINX Ingress pointing to the broker and the L4 load balancer, but I can't add SSL to it via a Google-managed certificate.
I've also tried creating a Google TCP/UDP load balancer, but only the dashboard is displayed and I can't connect to the broker, maybe because the HTTP-to-TCP traffic is not pointed at the correct port? I'm not sure.
I also thought that an L7 load balancer pointing to the backend service created by the Ingress (which points to the L4 load balancer ports) might be an option, but I couldn't make it work.
Has anyone been able to implement this architecture and can provide me with an example of it? Basically I want to connect to the broker via WSS with a custom domain, using Kubernetes with a Google-managed certificate.
Thanks.

Kubernetes outbound request to use service IP

In our Kubernetes deployment, we have deployed a web app with a Deployment controller and created a load balancer for external access.
All inbound requests are load-balanced by the load balancer, and this works fine.
But we are facing an issue with our outbound requests. In our case the external application only accepts traffic from whitelisted IP addresses, so we wanted to have the load balancer IP whitelisted, since pods are ephemeral in nature and their IPs will not be static.
But as the requests originate from a pod, they keep the pod's source IP and the external application drops them.
Is there a way for a pod to send outbound requests with the Service IP as the source, or can the source IP be masked by the Service IP?
You can potentially use an egress gateway for this. Istio provides Envoy as an egress gateway proxy. All outbound traffic from your service inside the cluster will be routed through this egress proxy. You can configure TLS origination at the proxy before the traffic is sent to the external service. You then need to whitelist the IP of the egress gateway in your external service.
Another option is to put a reverse proxy in front of the external service, terminate the traffic coming from the service inside Kubernetes there, and start a new TCP session from the reverse proxy to the external service. In this case the reverse proxy accepts connections from any origin IP, but the external service only receives requests originating from the proxy. You can configure the proxy to pass the actual client IP in an HTTP header, typically X-Forwarded-For.
https://istio.io/docs/tasks/traffic-management/egress/
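As a rough illustration of the Istio side, here is the kind of ServiceEntry you would declare for the external service before wiring it through the egress gateway as described in the linked task (the host name is hypothetical):
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: partner-api              # hypothetical name
spec:
  hosts:
    - api.partner.example        # hypothetical external service that whitelists your egress IP
  location: MESH_EXTERNAL        # the service lives outside the mesh
  resolution: DNS
  ports:
    - number: 443
      name: tls
      protocol: TLS
The Gateway and VirtualService that actually force traffic for this host through the egress gateway follow the patterns in the linked documentation.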
I am assuming you are using Kubernetes in IPv4 mode. When you access an external IP address from a Kubernetes pod, the request is source-NATed. This means the packet will carry the IP address of the host's (VM's?) ethernet interface through which the traffic flows out. Please whitelist this IP and see if that helps.
This would be a good reference: https://www.youtube.com/watch?v=0Omvgd7Hg1I
Please note that the Service IP is useful when other services want to discover and talk to a service; iptables (in kube-proxy's iptables mode) translates it to a Pod IP. It does not come into play for traffic originating from the given service.

Why is the Azure Load Balancer created by AKS set up to direct traffic to ports 80 and 443 on nodes rather than to the NodePorts opened by a service?

I have an AKS cluster with an nginx ingress controller. The controller has created a service of type LoadBalancer, and its Ports section looks like this (from kubectl get service):
80:31141/TCP
If I understand things correctly, port 80 is a ClusterIP port that is not reachable from the outside, while 31141 is a NodePort that is reachable from outside. So I would assume that the Azure Load Balancer sends traffic to this port 31141.
I was surprised to find that Azure Load Balancer is set up with a rule:
frontendPort: 80
backendPort: 80
probe (healthCheck): 31141
So it actually does use the NodePort, but only as a health check, and all traffic is sent to port 80, which presumably functions the same way as 31141.
A curious note is that if I try to reach the node IP at port 80 from a pod I only get "connection refused", but I suppose it does work if traffic comes from a load balancer.
I was not able to find any information about this on the internet, so the question is: how does this really work, and why does the Azure LB do it this way?
P.S. I don't have troubles with connectivity, it works. I am just trying to understand how and why it does behind the scenes.
I think I have figured out how that works (disclaimer: my understanding might not be correct, please correct me if it's wrong).
What happens is that load-balanced traffic is not "handled" by the node itself, neither on port 80 nor on the opened node port (31141 in my case). Instead, traffic sent to the node is routed further with the help of iptables: if a packet hits the node with the LB frontend IP and port 80 as its destination, it goes to the service and further to the pod.
As for the health check, I suppose it cannot use the same port 80 because the probe targets the node itself directly rather than the external (LB frontend) IP, so it uses the service's nodePort instead.
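For context, here is a rough sketch of the kind of Service the ingress controller creates; the nodePort value 31141 comes from the question, the rest is illustrative:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress             # hypothetical label
  ports:
    - name: http
      port: 80          # exposed on the Azure LB frontend and programmed into iptables for the LB IP
      targetPort: 80    # container port in the controller pod (an assumption)
      nodePort: 31141   # opened on every node; here used by the LB health probe
kube-proxy programs iptables so that a packet arriving at a node with the LB frontend IP and port 80 as its destination is DNATed straight to a controller pod, which is why port 80 "works" for load-balanced traffic even though no process on the node listens on it.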
It seems you have some misunderstandings about the ingress ports. Let me show you some details about ingress in AKS.
Ingress info:
From the screenshot, ports 80 and 443 are the ports of the Azure LB, which you can access from the Internet via the public IP associated with the LB; here the public IP is 40.121.64.51. Ports 31282 and 31869 are ports on the AKS nodes, which you cannot access from the Internet; you can only access them from the VNet through the nodes' private IPs.
Azure LB info:
health probes:
LB rules:
From the screenshots, you can see the health probes and the rules of the Azure LB. It uses them to redirect traffic from the Internet to the AKS nodes' ports; the nodes are the backend pool of the Azure LB.
Hope it helps you understand the traffic of the ingress in AKS.

Is it possible to find incoming IP addresses in Google Container Engine cluster?

The access log of my nginx deployed in a GKE Kubernetes cluster (behind a type: LoadBalancer Kubernetes service) shows internal IPs instead of the real visitor IP.
Is there a way to find the real IPs anywhere? Maybe some log file provided by GKE/Kubernetes?
Right now, the type: LoadBalancer service does a double hop. The external request is balanced among all the cluster's nodes, and then kube-proxy balances amongst the actual service backends.
kube-proxy NATs the request. For example, a client request from 1.2.3.4 to your external load balancer at 100.99.98.97 gets NATed in the node to 10.128.0.1->10.100.0.123 (the node's private IP to the pod IP). So the "src ip" you see in the backend is actually the private IP of the node.
There is a feature planned with a corresponding design proposal for preservation of client IPs of LoadBalancer services.
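For what it's worth, the knob this proposal turned into is the Service's externalTrafficPolicy field. A minimal sketch, assuming an nginx Service of type LoadBalancer (names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: nginx                    # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # send traffic only to pods on the receiving node and preserve the client source IP
  selector:
    app: nginx                   # hypothetical label
  ports:
    - port: 80
      targetPort: 80
The trade-off is that the second hop is skipped, so nodes without a matching pod fail the LB health check and receive no traffic.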
You could use the real IP module for nginx.
Pass your internal GKE net as a set_real_ip_from directive and you'll see the real client IP in your logs:
set_real_ip_from 192.168.1.0/24;
Typically you would add to the nginx configuration:
The load balancer's IP, i.e. the IP that you currently see in your logs instead of the real client IP.
The Kubernetes network, i.e. the subnet your Pods are in (the "Docker subnet").
Adding these lines to the http block of my nginx.conf fixed the issue for me, and real visitor IPs started showing up in the Stackdriver Log Viewer:
http {
...
real_ip_recursive on;                 # walk back through X-Forwarded-For past every trusted hop
real_ip_header X-Forwarded-For;       # take the real client IP from this header
set_real_ip_from 127.0.0.1;           # trust localhost
set_real_ip_from 192.168.0.0/24;      # trust the internal GKE/load-balancer range (adjust to your network)
set_real_ip_from 10.0.0.0/8;          # trust the pod/cluster network range (adjust to your network)
...
}
I'm a happy camper :)