Kubernetes + GCP TCP Load balancing: How can I assign a static IP to a Kubernetes Service?

I want to assign a static (i.e. non-ephemeral) regional IP to a Kubernetes service. Currently the service is of type "LoadBalancer", which GCP exposes as a regional TCP load balancer. By default the IP address of the forwarding rule is ephemeral. Is there any way I can use an existing static IP, or assign my own address by name (as is possible with the Ingress/HTTP(S) load balancer)?
I have also tried to create my own forwarding rule with a custom static regional IP, using the NodePort of the service. I only succeeded in building the forwarding rule against the actual NodePort, but how does the Kubernetes/GCP magic that maps port 80 to the NodePort work when using type "LoadBalancer"?

I have found a way to set the static IP; after setting it, I needed to delete the Service object and re-create it:
apiVersion: v1
kind: Service
spec:
  loadBalancerIP: '<static ip>'
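For reference, the static address itself can be reserved up front with gcloud; a quick sketch (the address name and region below are just examples, not from the question):
# Reserve a regional static IP, then read back its value for loadBalancerIP
gcloud compute addresses create my-service-ip --region us-central1
gcloud compute addresses describe my-service-ip --region us-central1 --format='value(address)'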
But I am still curious about the second part of my question.

Related

Domain Name mapping to K8 service type of load balancer on GKE

I am in the process of learning Kubernetes and creating a sample application on GKE. I am able to create pods, containers, and services on minikube; however, I got stuck when exposing it on the internet using my custom domain, e.g. hr.mydomain.com.
My application, file-process, is running on port 8080, and I now want to expose it to the internet. I tried creating a Service of LoadBalancer type on GKE. I get the IP of the load balancer and map it to the A record of hr.mydomain.com.
My question is: if this service is restarted, does the service IP change every time, making the service inaccessible?
How do I manage it? What are the best practices when mapping domain names to services?
File service
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
  labels:
    app: file-process-service
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: file-process-api
Google Kubernetes Engine is designed to take as much configuration hassle out of your hands as possible. Even if you restart the service, nothing will change with regard to its availability from the Internet.
Networking (including load balancing) is managed automatically within the GKE cluster:
...Kubernetes uses Services to provide stable IP addresses for applications running within Pods. By default, Pods do not expose an external IP address, because kube-proxy manages all traffic on each node. Pods and their containers can communicate freely, but connections outside the cluster cannot access the Service. For instance, in the previous illustration, clients outside the cluster cannot access the frontend Service using its ClusterIP.
This means that once you expose the service and it gets an external IP, that IP will stay the same until the load balancer is deleted:
The network load balancer is aware of all nodes in your cluster and configures your VPC network's firewall rules to allow connections to the Service from outside the VPC network, using the Service's external IP address. You can assign a static external IP address to the Service.
At this point, when you have a load balancer with a static public IP in front of your service, you can set this IP as an A record for your domain.
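If the load balancer was already created with an ephemeral IP, that IP can also be promoted to a static one before creating the A record; a hedged sketch (the name, address, and region are placeholders):
# Promote the load balancer's existing ephemeral IP to a reserved static address
gcloud compute addresses create file-process-ip --addresses 203.0.113.10 --region us-central1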

How can I visit NodePort service in GKE

This is my Service YAML. When I create the svc on GKE, I don't know how to access it. I can't find an external IP for the svc. What is the standard way to access this svc? Do I need to create an Ingress?
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: ui-svc
  labels:
    targetEnv: dev
    app: ui-svc
spec:
  selector:
    app: ui
    targetEnv: dev
  ports:
    - name: ui
      port: 8080
      targetPort: 8080
      nodePort: 30080
  type: NodePort
Unless you use a private cluster, where nodes don't have public IP addresses, you can access your NodePort services via any node's public IP address.
What you see in the Services & Ingress section in the Endpoints column is the internal cluster IP address of your NodePort service.
If you want to know the public IP addresses of your GKE nodes, go to Compute Engine > VM instances.
You will see the list of all your Compute Engine VMs, which also includes your GKE nodes. Note the IP address in the External IP column. Use it along with the port number, which you can check in your NodePort service details: simply click on the service name, "ui-svc", to see the details. At the very bottom of the page you should see the ports section with the node port listed.
So in my case I should use <any_node's_public_ip_address>:31251.
One more important thing: don't forget to allow traffic to this port in the firewall, as by default it is blocked. You need to explicitly allow traffic to your nodes, e.g. on port 31251, to be able to access it from the public internet. Simply go to VPC Network > Firewall and set the appropriate rule.
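The same rule can be created from the command line; a sketch assuming the default network (the rule name and port are examples; consider narrowing --source-ranges):
# Allow external TCP traffic to the NodePort on all instances in the network
gcloud compute firewall-rules create allow-nodeport-31251 \
    --network default \
    --allow tcp:31251 \
    --source-ranges 0.0.0.0/0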
UPDATE:
If you created an Autopilot cluster, it is public by default, which means its nodes have public IP addresses.
If during cluster creation you selected the second option, i.e. "Private cluster", your nodes won't have public IPs by design and you won't be able to access your NodePort service on any public IP. The only option that remains in such a scenario is exposing your workload via a LoadBalancer service or an Ingress, where a single public IP endpoint is created for you so you can access your workload externally.
However, if you chose the default option, i.e. "Public cluster", you can use your nodes' public IPs to access your NodePort service in the very same way as with a Standard (non-Autopilot) cluster.
Of course, in Autopilot mode you won't see your nodes as Compute Engine VMs in your GCP console, but you can still get their public IPs by running:
kubectl get nodes -o wide
They will be shown in the EXTERNAL-IP column.
To connect to your cluster, simply click the three dots to the right of the cluster name ("Kubernetes Engine" > "Clusters"), click "Connect", then click "RUN IN CLOUD SHELL".
Since you don't know what network tags (if any) have been assigned to your GKE Autopilot nodes, as you don't manage them and they are not shown in your GCP console, you won't be able to use network tags when defining a firewall rule to allow access to your NodePort service port, e.g. 30543. You would have to choose the option "All instances in the network" instead.

How to get client ip from Google Network Load Balancer with kubernetes service

I created a Kubernetes service in GKE with type: LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: nginx
It's an nginx service, and I'm trying to get the original client IP, like this:
location / {
    echo $remote_addr;
    echo $http_x_forwarded_for;
}
But the result I get is:
10.140.0.97
$remote_addr is an internal Kubernetes IP.
$http_x_forwarded_for is empty.
I don't know why this doesn't match what the documentation says.
What I read
https://cloud.google.com/load-balancing/docs/network
Network Load Balancing is a pass-through load balancer, which means that your firewall rules must allow traffic from the client source IP addresses.
https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#ext-lb
If your Service needs to be reachable from outside the cluster and outside your VPC network, you can configure your Service as a LoadBalancer, by setting the Service's type field to LoadBalancer when defining the Service. GKE then provisions a Network Load Balancer in front of the Service. The Network Load Balancer is aware of all nodes in your cluster and configures your VPC network's firewall rules to allow connections to the Service from outside the VPC network, using the Service's external IP address. You can assign a static external IP address to the Service. Visit Configuring Domain Names with Static IP Addresses for more information.
Just add externalTrafficPolicy: Local
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
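Applied to the Service from the question, the whole manifest would look like this (a sketch; only the externalTrafficPolicy line is new):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep the original client source IP
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: nginx
Note that with Local, nodes without a ready nginx pod fail the load balancer's health check and receive no traffic, which is how the source IP can be preserved.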
Packets sent to Services with Type=LoadBalancer are source NAT’d by default, because all schedulable Kubernetes nodes in the Ready state are eligible for load-balanced traffic. So if packets arrive at a node without an endpoint, the system proxies it to a node with an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).
Reference
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
https://github.com/kubernetes/kubernetes/issues/10921
This is because the Network Load Balancer handles all incoming traffic and redirects it to your GKE cluster. Inside the k8s cluster, everything runs on a virtual IP network, so you get 10.140.0.97.
The first document says you need to set up firewall rules to accept traffic from client source IPs; otherwise, by GCP default, you won't get any incoming traffic. But the second document indicates that GKE sets this up for you automatically. All you need to do is find your external IP and give it a try. You should see your nginx welcome page.
P.S. The default external IP is ephemeral; if you want a static IP you can reserve one via the console or the gcloud CLI.

AKS using Internal endpoint for communication

I know we can set up an application with an internal or external IP address using a load balancer. If I use an external IP address, I can reserve it in Azure beforehand as a public IP. Now my question is: what if I don't want that IP address to be visible from outside the cluster?
The configuration for an internal IP address in the Kubernetes YAML would be:
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  loadBalancerIP: 10.240.1.90
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: internal-app
Now I've read that the specified IP address must reside in the same subnet as the AKS cluster and must not already be assigned to a resource.
If the address range of my AKS agent pool is set up as X.X.0.0/16 and I use, for example, X.X.0.1 as the IP address for my internal load balancer, I get the error: 'Private IP address is in reserved subnet range'.
I see I also have something like internal endpoints in AKS. Can those be used for internal application-to-application communication?
I'm just looking for a way for my apps to talk to each other internally without exposing them to the outside world. I'd also like the setup to be repeatable, which means dynamic IP addresses wouldn't be good; I don't want to change all the apps' internal settings every time an IP address changes unexpectedly.
The easiest solution is to use a Service of type ClusterIP. It creates a virtual IP address inside the cluster that your apps can use to reach each other. You can also use the DNS name of the service to reach it:
service-name.namespace.svc.cluster.local
from any pod inside Kubernetes. Either way, you don't have to care about IP addresses at all; Kubernetes manages them.
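A minimal sketch of such a Service (name, namespace, selector, and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  namespace: default
spec:
  type: ClusterIP        # the default type; reachable only from inside the cluster
  selector:
    app: internal-app
  ports:
    - port: 80           # port other pods connect to
      targetPort: 8080   # port the app itself listens on
Other pods can then reach it as internal-app.default.svc.cluster.local (or simply internal-app from within the same namespace), no matter which cluster IP Kubernetes happens to assign.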

Expose port of a specific container to external IP

I'm deploying a helm chart that consists of a service with three replica containers. I've been following these directions for exposing a service to an external IP address.
How do I expose a port per container or per pod? I explicitly do not want to expose a load balancer that maps that port onto some (but any) pod in the service. The service in question is part of a stateful set, and to clients on the outside it matters which of the three are being contacted, so I can't abstract that away behind a load balancer.
You need to create a separate Service for every pod in your StatefulSet. To distinguish the pods, select on per-pod labels: the StatefulSet controller labels each pod with its own name.
When you have separate Services, you can use them individually in an Ingress.
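A sketch of one such per-pod Service, assuming a StatefulSet named my-set; the statefulset.kubernetes.io/pod-name label used in the selector is set automatically by the StatefulSet controller:
apiVersion: v1
kind: Service
metadata:
  name: my-set-0                 # one Service per pod: my-set-0, my-set-1, my-set-2
spec:
  selector:
    statefulset.kubernetes.io/pod-name: my-set-0
  ports:
    - port: 80
      targetPort: 8080           # placeholder; match your container's port
Repeat the manifest for each replica, changing only the name and the pod-name selector.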
Just adding the official Kubernetes documentation about creating a service:
https://kubernetes.io/docs/concepts/services-networking/service/
A Service in Kubernetes is a REST object, similar to a Pod. Like all of the REST objects, a Service definition can be POSTed to the apiserver to create a new instance. For example, suppose you have a set of Pods that each expose port 9376 and carry a label "app=MyApp".
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
This specification will create a new Service object named “my-service” which targets TCP port 9376 on any Pod with the "app=MyApp" label. This Service will also be assigned an IP address (sometimes called the “cluster IP”), which is used by the service proxies (see below). The Service’s selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named “my-service”.
Note that a Service can map an incoming port to any targetPort. By default the targetPort will be set to the same value as the port field. Perhaps more interesting is that targetPort can be a string, referring to the name of a port in the backend Pods. The actual port number assigned to that name can be different in each backend Pod. This offers a lot of flexibility for deploying and evolving your Services. For example, you can change the port number that pods expose in the next version of your backend software, without breaking clients.
Kubernetes Services support TCP and UDP for protocols. The default is TCP.
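To illustrate the named-port case, a hedged sketch of a backend Pod whose port the Service above could reference with targetPort: http-backend instead of 9376 (the pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: MyApp                   # matches the Service's selector
spec:
  containers:
    - name: backend
      image: example/backend:1.0     # placeholder image
      ports:
        - name: http-backend         # the Service's targetPort can refer to this name
          containerPort: 9376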