Load balancing subdomains in Azure Service Fabric - azure-service-fabric

I have two web apps sitting in the FrontEnd node type of a Service Fabric cluster. One app is listening on port 7000, the other on 8000, but both ultimately sit behind the same load balancer with public IP 1.2.3.4.
I would like to configure DNS to point both app1.mydomain.com and app2.mydomain.com to IP 1.2.3.4, but have the load balancer route requests for the app1 subdomain (port 80 or 443) to port 7000 and requests for app2 to port 8000.
Is this possible, or do I need to set up two load balancers with two public IPs?

It seems I can add multiple frontend IP addresses to the same load balancer and associate different rules with each IP address added.
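Schematically, that setup would look something like this. This is illustrative pseudo-config loosely following the shape of the ARM load balancer resource, not literal ARM or az syntax; the second public IP and all names are hypothetical:

```yaml
# Illustrative sketch only - not literal ARM template syntax.
frontendIPConfigurations:
  - name: fe-app1            # existing public IP 1.2.3.4
  - name: fe-app2            # a second public IP (hypothetical), e.g. for app2.mydomain.com
loadBalancingRules:
  - name: app1-http
    frontendIPConfiguration: fe-app1
    frontendPort: 80         # or 443
    backendPort: 7000        # app1 on the FrontEnd node type
  - name: app2-http
    frontendIPConfiguration: fe-app2
    frontendPort: 80         # or 443
    backendPort: 8000        # app2 on the FrontEnd node type
```

With this layout, app1.mydomain.com would resolve to the first public IP and app2.mydomain.com to the second; the load balancer itself still only routes on IP and port, not on the Host header.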

Related

Expose services inside Kubernetes cluster behind NAT

I have a GKE cluster set up with Cloud NAT, so traffic from any node/container going outward would have the same external IP. (I needed this for whitelisting purposes while working with 3rd-party services).
Now, if I want to deploy a proxy server onto this cluster that does basic traffic forwarding, how do I expose the proxy server "endpoint"? Or more generically, how do I expose a service if I deploy it to this GKE cluster?
Proxy server running behind NAT?
That's a bad idea, unless it is meant only for your Kubernetes cluster's workloads, but you didn't specify anywhere that it should be reachable only by other Pods running in the same cluster.
As you can read here:
Cloud NAT does not implement unsolicited inbound connections from the internet. DNAT is only performed for packets that arrive as responses to outbound packets.
So it is not meant to be reachable from outside.
If you want to expose an application within your cluster, making it available only to other Pods, use a simple ClusterIP Service. That is the default type, and a Service will be created as ClusterIP even if you don't specify a type at all.
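For example, a minimal ClusterIP Service for the proxy might look like this (the names, labels, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: proxy-internal      # hypothetical name
spec:
  # type: ClusterIP is the default, so it can be omitted
  selector:
    app: proxy              # must match the labels on the proxy Pods
  ports:
  - port: 8080              # port other Pods use via the Service's cluster IP / DNS name
    targetPort: 8080        # container port on the proxy Pods
```

Other Pods in the cluster could then reach it at proxy-internal:8080 (or proxy-internal.<namespace>.svc.cluster.local), but nothing outside the cluster can.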
Normally, to expose a service endpoint running on a Kubernetes cluster, you have to use one of the Service Types, as Pods have internal IP addresses and are not addressable externally.
The possible service types:
ClusterIP: this also uses an internal IP address, and is therefore not addressable externally.
NodePort: this type opens a port on every node in your Kubernetes cluster and configures iptables to forward traffic arriving at this port to the Pods providing the actual service.
LoadBalancer: this type opens a port on every node as with NodePort, also allocates a Google Cloud Load Balancer, and configures that load balancer to send traffic to the port opened on the Kubernetes nodes (actually load balancing the incoming traffic across your operational Kubernetes nodes).
ExternalName: this type configures the Kubernetes internal DNS server to return a CNAME record pointing to the specified external DNS name (to provide a dynamic alias inside the cluster for connecting to external services).
Out of those, NodePort and LoadBalancer are usable for your purposes. With a simple NodePort type Service, you would need publicly accessible node IP addresses, and the allocated port could be used to reach your proxy service through any node of your cluster. Since any one of your nodes may disappear at any time, this kind of access is only good if your proxy clients know how to switch to another node IP address. Alternatively, you could use a LoadBalancer type Service (see the sketch below): your clients connect to the IP address of the provisioned Google Cloud Load Balancer, the load balancer forwards the traffic to one of the running nodes of your cluster, and that node forwards it on to one of the Pods providing the service.
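A minimal sketch of the LoadBalancer variant, assuming the proxy Pods are labeled app: proxy and listen on port 3128 (all names and ports here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: proxy-public        # hypothetical name
spec:
  type: LoadBalancer        # GKE provisions a Google Cloud load balancer with an external IP
  selector:
    app: proxy              # must match the labels on the proxy Pods
  ports:
  - port: 3128              # port exposed on the load balancer
    targetPort: 3128        # container port on the proxy Pods
```

Note that this only covers inbound access to the proxy; its outbound traffic to the internet still leaves through Cloud NAT as before.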
For your proxy server to access the Internet as a client, you also need some kind of public IP address. Either you give the Kubernetes nodes public IP addresses (in that case, if you have more than a single node, you'd see multiple source IP addresses, as each node has its own), or, if you use private addresses for your Kubernetes nodes, you need source NAT functionality like the one you already use: Cloud NAT.

Kubernetes outbound request to use service IP

In our Kubernetes deployment, we have deployed a web app via a Deployment controller and created a load balancer for external access.
So all inbound requests are load-balanced by the load balancer, and that works fine.
But we are facing an issue with our outbound requests. In our case the external application can only accept traffic from whitelisted IP addresses, so we wanted to give it the load balancer IP to whitelist, since pods are ephemeral in nature and their IPs are not static.
But as the requests originate from a pod, they keep the pod's source IP, and the external application drops them.
Is there a way for a pod to send outbound requests using the service IP as the source, or can the source IP be masked by the service IP?
You can potentially use an egress gateway for this. Istio provides Envoy as an egress gateway proxy. All outbound traffic from your service inside the cluster will be routed through this egress proxy. You can configure TLS origination at the proxy before the traffic is sent to the external service. You then need to whitelist the IP of the egress gateway in your external service.
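As a rough sketch of the egress-gateway approach, assuming Istio is installed with its default istio-egressgateway and using a hypothetical external host api.example.com: the external service is declared with a ServiceEntry and bound to the egress gateway; a VirtualService (see the linked Istio task below) is still needed to route mesh traffic through the gateway.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api              # hypothetical name
spec:
  hosts:
  - api.example.com               # hypothetical external host that whitelists the egress IP
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-api                # hypothetical name
spec:
  selector:
    istio: egressgateway          # selects the default Istio egress gateway pods
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - api.example.com
    tls:
      mode: PASSTHROUGH           # pass the TLS connection through to the external host
# A VirtualService (not shown) then routes traffic for api.example.com from the
# sidecars to the egress gateway and from the gateway out to the external host.
```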
Another option would be to put a reverse proxy in front of that external service, terminate the traffic coming from the service inside Kubernetes there, and start a new TCP session from the reverse proxy to the external service. In this case the reverse proxy accepts connections from any origin IP, but the external service only receives requests originating from the proxy. You can configure the proxy to pass the actual client IP in an HTTP header, typically X-Forwarded-For.
https://istio.io/docs/tasks/traffic-management/egress/
I am assuming you are running Kubernetes in IPv4 mode. When you access an external IP address from a Kubernetes pod, the requests are source NAT'd. This means the packet will carry the IP address of the host's (VM's) ethernet interface through which the traffic flows out. Please whitelist this IP and see if that helps.
This would be a good reference: https://www.youtube.com/watch?v=0Omvgd7Hg1I
Please note that the service IP is useful when other services want to discover and talk to a service; iptables (in kube-proxy's iptables mode) translates it to a Pod IP. It is not in play for traffic originating from the given service.

Why is the Azure Load Balancer created by AKS set up to direct traffic to ports 80 and 443 on the nodes rather than to the NodePorts opened by a service?

I have an AKS cluster with an nginx ingress controller. The controller has created a service of type LoadBalancer, and its Ports section looks like this (from kubectl get service):
80:31141/TCP
If I understand things correctly, port 80 is a ClusterIP port that is not reachable from the outside, while port 31141 is a NodePort that is reachable from outside. So I would assume that the Azure Load Balancer sends traffic to this port 31141.
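For reference, the Service behind output like "80:31141/TCP" would look roughly like this (names and the container port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress             # hypothetical label
  ports:
  - name: http
    port: 80          # the service (cluster) port - the left side of "80:31141"
    targetPort: 80    # container port on the ingress controller pods
    nodePort: 31141   # the node port - the right side of "80:31141"
```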
I was surprised to find that Azure Load Balancer is set up with a rule:
frontendPort: 80
backendPort: 80
probe (healthCheck): 31141
So it actually does use the NodePort, but only as a health check, and all traffic is sent to port 80, which presumably functions the same way as 31141.
A curious note: if I try to reach the node IP on port 80 from a pod, I only get "connection refused", but I suppose it does work when the traffic comes from the load balancer.
I was not able to find any information about this on the internet, so the question is: how does this really work, and why does the Azure LB do it this way?
P.S. I don't have trouble with connectivity; it works. I am just trying to understand how and why it behaves this way behind the scenes.
I think I have figured out how that works (disclaimer: my understanding might not be correct, please correct me if it's wrong).
What happens is that the load-balanced traffic does not reach the node itself on port 80, nor does it arrive on the opened node port (31141 in my case). Instead, the traffic sent to the node is not "handled" by the node itself but rather routed onward with the help of iptables: if traffic hits the node with a destination IP equal to the LB frontend IP and port 80, it goes to the service and on to the pod.
As for the health check, I suppose it cannot use port 80 in the same way, because the probe's destination is the node itself rather than the external IP (the LB frontend IP); that is why it uses the service's nodePort instead.
As I see it, you have some misunderstandings about the ingress ports. Let me show you some details about ingress in AKS.
Ingress info:
From the screenshot, ports 80 and 443 are the ports of the Azure LB, which you can access from the Internet via the public IP associated with the LB (here the public IP is 40.121.64.51). Ports 31282 and 31869 are ports on the AKS nodes, which you cannot access from the Internet; you can only reach them from within the vnet through the nodes' private IPs.
Azure LB info:
Health probes:
LB rules:
From the screenshots, you can see the health probes and the rules of the Azure LB. It uses them to redirect the traffic from the Internet to the AKS nodes' ports, and the nodes are the backend of the Azure LB.
Hope it helps you understand the traffic of the ingress in AKS.

Forwarding traffic from a DigitalOcean Load Balancer to a Kubernetes Service not working

I created a Kubernetes service that is exposed via type: NodePort. I can access the service in my browser if I enter http://PublicDropletIp:31433.
Now I want to use a DigitalOcean Load Balancer to forward traffic from port 80 to the service. So I set a rule for the Load Balancer to forward http/80 to Droplet http/31433.
Unfortunately, this doesn't work. If I enter the load balancer IP in the browser, I get: 503 Service Unavailable.
Does anyone know how I can expose the service so that the Load Balancer can forward traffic to it?
I had this same issue and ended up on this thread. If anyone else is looking, I resolved it by configuring the firewall on my server.
To answer the question above: the firewall should be configured to accept TCP connections from the load balancer's IP on port 31433.
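For completeness, a sketch of the NodePort Service this refers to (all names and the container port are hypothetical; only 31433 comes from the question). The droplet firewall then has to allow TCP 31433 from the load balancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-webapp           # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-webapp          # hypothetical label on the web app pods
  ports:
  - port: 80                # cluster-internal port
    targetPort: 8080        # hypothetical container port
    nodePort: 31433         # the port the DigitalOcean Load Balancer forwards http/80 to
```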

Kubernetes Networking on Outbound Packet

I have created a k8s service (type=LoadBalancer) with a number of pods behind it. To my understanding, all packets initiated from the pods will have the Pod IP as the source IP, whereas those responding to inbound traffic will have the LoadBalancer IP as the source IP. So my questions are:
Is my claim true, or are there times the source IP will be the node IP instead?
Are there any tricks in k8s by which I can change the source IP in the first scenario from the Pod IP to the LB IP?
Is there any way to specify a designated pod IP?
The Pods are running in the internal network while the load balancer is exposed on the Internet, so the addresses of the packets will look more or less like this:
[pod1] 10.1.0.123 <-----> [load balancer] 10.1.0.234 / 201.123.41.53 <-----> [browser] 217.123.41.53
For keeping a client on a particular pod, have a look at the Service's sessionAffinity setting.
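A minimal sketch of sessionAffinity on a LoadBalancer Service (names and the container port are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                  # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app                 # hypothetical label on the backing pods
  sessionAffinity: ClientIP     # keep requests from the same client IP on the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800     # the default affinity timeout (3 hours)
  ports:
  - port: 80
    targetPort: 8080            # hypothetical container port
```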
As user315902 said, Azure ACS exposes a Kubernetes service to the internet with an Azure load balancer.
Architectural diagram of Kubernetes deployed via Azure Container Service:
Is my claim true, or are there times the source IP will be the node IP instead?
If we expose the service to the internet, I think the source IP will be the load balancer's public IP address. In ACS, if we expose multiple services to the internet, the Azure LB will add multiple public IP addresses.
Are there any tricks in k8s by which I can change the source IP in the first scenario from the Pod IP to the LB IP?
Do you mean you want to use a node's public IP address to expose the service to the internet? If yes, I think we can't use a node IP to expose a service to the internet. In Azure, we have to use the LB to expose a service to the internet.