I know we can set up an application with an internal or external IP address using a load balancer. If I use an external IP address, I can reserve it in Azure beforehand as a public IP. Now my question is: what if I don't want that IP address to be visible from outside the cluster?
The configuration for an internal IP address in the Kubernetes YAML would be:
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  loadBalancerIP: 10.240.1.90
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
Now I've read that the specified IP address must reside in the same subnet as the AKS cluster and must not already be assigned to a resource.
If my AKS agent pool subnet is set up as X.X.0.0/16 and I use, for example, X.X.0.1 as the IP address for my internal load balancer, I get the error: 'Private IP address is in reserved subnet range'
I see AKS also has something like internal endpoints. Can those be used for internal application-to-application communication?
I'm just looking for any way for my apps to talk to each other internally without exposing them to the outside world. I'd also like the setup to be repeatable, which means something like dynamic IP addresses wouldn't work well: I don't want to change every app's internal settings whenever an IP address changes unexpectedly.
The easiest solution is to use a Service of type ClusterIP. It creates a virtual IP address inside the cluster that your apps can use to reach each other. You can also use the DNS name of the service to reach it:
service-name.namespace.svc.cluster.local
from any pod inside the cluster. Either way you don't have to care about IP addresses at all; Kubernetes manages them.
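For illustration, here is a minimal sketch of such a Service; the name, port, and selector are placeholders, and ClusterIP is the default type, so stating it explicitly is optional:
apiVersion: v1
kind: Service
metadata:
  name: internal-app
spec:
  type: ClusterIP  # default; a stable virtual IP, unreachable from outside the cluster
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: internal-app
Other pods can then reach it at internal-app.<namespace>.svc.cluster.local, and the ClusterIP stays stable for the lifetime of the Service, so the setup is repeatable.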
Related
I am following this guide to expose a service running on my bare metal k8s cluster to the world.
The guide suggests using MetalLB for external access. The problem is that, during the MetalLB setup process, I am asked to give a range of available IP addresses.
The hosting provider I am using is very basic, and all I have is the IP address of the Linux instance that is running my K8s node. So my question is, how can I provision an IP address for assigning to my application? Is this possible with a single IP?
Alternatively, I'd love to get this done with a NodePort, but I need to support HTTPS traffic and I am not sure that's possible if I go that way.
Specify a single IP using CIDR notation: your-ip/32 (192.168.10.0/32, for example).
Single IP Address Load Balancer
Using a single IP address is possible. In this case you don't need the speaker pods that announce the external IP addresses, and thus no pod security labels are needed. So, if you install using helm, prepare a metallb-values-helm.yaml file:
speaker:
  enabled: false
Then install metallb:
kubectl create namespace metallb-system
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb --namespace metallb-system -f metallb-values-helm.yaml
Now prepare a configuration of the public IP address in a metallb-config-ipaddress.yaml file:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb-ip
  namespace: metallb-system
spec:
  addresses:
  # The IP address of the kubernetes master node
  - 192.168.178.10/32
And apply:
kubectl apply -f metallb-config-ipaddress.yaml
Multiple Services Sharing the Same IP Address
This should already work for a single service. However, if you want to expose multiple services on different ports of the same IP address, you need to provide an annotation in every service manifest, as described here. A service manifest will look like:
apiVersion: v1
kind: Service
metadata:
  name: cool-service
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "key-to-share-192.168.178.10"
...
The string "key-to-share-192.168.178.10" is arbitrary, but must be identical for all services that share the IP. If there is really just a single IP address in your pool (as specified above), you don't have to specify it as loadBalancerIP: 192.168.178.10. That would only be required if you had multiple IP addresses and wanted to select one. So that's all.
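For example, a second service could share the same address on a different port, as long as it carries the identical annotation (the name, port, and selector here are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: other-service
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "key-to-share-192.168.178.10"
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 8443
  selector:
    app: other-app
MetalLB will colocate both services on 192.168.178.10 because their sharing keys match and their ports don't collide.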
What's next?
You can also use nginx-ingress as an ingress controller behind your MetalLB load balancer, which is still required to expose the nginx-ingress service. Then you can, e.g., separate your services via subdomains (pointing to the same IP address) like
service1.domain.com
service2.domain.com
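A sketch of such host-based routing with an Ingress (hostnames, service names, and ports are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: subdomain-routing
spec:
  ingressClassName: nginx
  rules:
  - host: service1.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: service2.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
Both hostnames resolve to the single MetalLB IP; nginx-ingress then routes by the Host header.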
I am in the process of learning Kubernetes and creating a sample application on GKE. I am able to create pods, containers, and services on minikube; however, I got stuck when exposing the application on the internet using my custom domain, like hr.mydomain.com.
My application, file-process, is running on port 8080, and now I want to expose it to the internet. I tried creating a service of type LoadBalancer on GKE. I get the IP of the load balancer and map it to an A record for hr.mydomain.com.
My question is: if this service is restarted, does the service IP change every time, making the service inaccessible?
How do I manage it? What are the best practices when mapping domain names to services?
File service
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
  labels:
    app: file-process-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: file-process-api
Google Kubernetes Engine is designed to take as much configuration hassle out of your hands as possible. Even if you restart the service, nothing will change with regard to its availability from the Internet.
Networking (including load balancing) is managed automatically within the GKE cluster:
...Kubernetes uses Services to provide stable IP addresses for applications running within Pods. By default, Pods do not expose an external IP address, because kube-proxy manages all traffic on each node. Pods and their containers can communicate freely, but connections outside the cluster cannot access the Service. For instance, in the previous illustration, clients outside the cluster cannot access the frontend Service using its ClusterIP.
This means that if you expose the service and it gets an external IP, that IP will stay the same until the load balancer is deleted:
The network load balancer is aware of all nodes in your cluster and configures your VPC network's firewall rules to allow connections to the Service from outside the VPC network, using the Service's external IP address. You can assign a static external IP address to the Service.
At this point, when you have a load balancer with a static public IP in front of your service, you can set this IP as an A record for your domain.
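A sketch of the static part (the address name and region are placeholders): reserve a regional static IP first, then pin the Service to it, so recreating the Service keeps the same address:
# Reserve a regional static IP; the region must match the cluster's region
gcloud compute addresses create file-process-ip --region us-central1
# Print the reserved address to use in the Service manifest and the A record
gcloud compute addresses describe file-process-ip --region us-central1 --format='value(address)'
The reserved address is then set as spec.loadBalancerIP on the Service, and the A record for hr.mydomain.com points to it.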
I created a Kubernetes service in GKE with type: LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: nginx
It's an nginx service, and I'm trying to get the original client IP, like this:
location / {
    echo $remote_addr;
    echo $http_x_forwarded_for;
}
But the result I get is:
10.140.0.97
$remote_addr looks like an IP from inside Kubernetes.
$http_x_forwarded_for is empty.
I don't know why this doesn't behave as the documentation says.
What I read
https://cloud.google.com/load-balancing/docs/network
Network Load Balancing is a pass-through load balancer, which means that your firewall rules must allow traffic from the client source IP addresses.
https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#ext-lb
If your Service needs to be reachable from outside the cluster and outside your VPC network, you can configure your Service as a LoadBalancer, by setting the Service's type field to LoadBalancer when defining the Service. GKE then provisions a Network Load Balancer in front of the Service. The Network Load Balancer is aware of all nodes in your cluster and configures your VPC network's firewall rules to allow connections to the Service from outside the VPC network, using the Service's external IP address. You can assign a static external IP address to the Service. Visit Configuring Domain Names with Static IP Addresses for more information.
Just add externalTrafficPolicy: Local
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
Packets sent to Services with Type=LoadBalancer are source NAT’d by default, because all schedulable Kubernetes nodes in the Ready state are eligible for load-balanced traffic. So if packets arrive at a node without an endpoint, the system proxies it to a node with an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).
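Merged into the nginx Service from the question, the result looks like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  externalTrafficPolicy: Local  # keep the client source IP instead of SNAT-ing to a node IP
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: nginx
With Local, traffic is only delivered to nodes that run an endpoint of the service, so $remote_addr shows the real client address.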
Reference
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
https://github.com/kubernetes/kubernetes/issues/10921
The Network Load Balancer handles all incoming traffic and redirects it to your GKE cluster. Inside the k8s cluster, everything runs on a virtual IP network, which is why you get 10.140.0.97.
The first document says you need to set up firewall rules to accept traffic from the client source IPs; otherwise, by GCP default, you won't get any incoming traffic. The second document indicates that GKE sets this up for you automatically. All you need to do is find your external IP and give it a try; you should see the nginx welcome page.
P.S. The default external IP is dynamic; if you want a static IP, you can get one via the console or the gcloud CLI.
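A possible sketch with gcloud (the name, address, and region are placeholders): an existing ephemeral load-balancer IP can be promoted to a static one by reserving it explicitly with --addresses:
# Promote the service's current ephemeral external IP to a reserved static IP
gcloud compute addresses create nginx-static-ip \
    --addresses <current-external-ip> \
    --region asia-east1
After that, the address stays reserved even if the forwarding rule is deleted and recreated.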
I want to assign a static (i.e. non-ephemeral) regional IP to a Kubernetes service. Currently the service is of type "LoadBalancer", which GCP exposes as a regional TCP load balancer. By default the IP address of the forwarding rule is ephemeral. Is there any way I can use an existing static IP, or assign my own address by name (as is possible with Ingress/HTTP(S) Load Balancer)?
I have also tried to create my own forwarding rule with a custom static regional IP using the NodePort of the service. I have only succeeded in building the forwarding rule using the actual NodePort, but how does the Kubernetes/GCP magic that maps port 80 to the NodePort work when using type "LoadBalancer"?
I have found a way to set the static IP. After that I needed to delete the service object and re-create it.
apiVersion: v1
kind: Service
spec:
  loadBalancerIP: '<static ip>'
But I am still curious about the second part of my question.
I need to access an internal application running on a GKE Nginx Ingress service riding on an Internal Load Balancer, from another GCP region.
I am fully aware that it is not possible using direct Google networking and it is a huge limitation (GCP Feature Request).
The Internal Load Balancer can be accessed perfectly well via a VPN tunnel from AWS, but I am not sure that creating such a tunnel between GCP regions on the same network is a good idea.
Workarounds are welcomed!
In the release notes from GCP, it is stated that:
Global access is an optional parameter for internal LoadBalancer Services that allows clients from any region in your VPC to access the internal TCP/UDP Load Balancer IP address.
Global access is enabled per-Service using the following annotation:
networking.gke.io/internal-load-balancer-allow-global-access: "true".
UPDATE: The service below works for GKE v1.16.x and newer versions:
apiVersion: v1
kind: Service
metadata:
  name: ilb-global
  annotations:
    # Required to assign internal IP address
    cloud.google.com/load-balancer-type: "Internal"
    # Required to enable global access
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
For GKE v1.15.x and older versions:
Accessing the internal load balancer IP from a VM sitting in a different region will not work, but this helped me make the internal load balancer global.
Since an internal load balancer is nothing but a forwarding rule, we can use a gcloud command to enable global access.
First, get the internal IP address of the Load Balancer using kubectl and note its IP, like below:
# COMMAND:
kubectl get services/ilb-global
# OUTPUT:
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
ilb-global   LoadBalancer   10.0.12.12   10.123.4.5    80:32400/TCP   18m
Note the value of "EXTERNAL-IP", or simply run the command below to make it even simpler:
# COMMAND:
kubectl get service/ilb-global \
-o jsonpath='{.status.loadBalancer.ingress[].ip}'
# OUTPUT:
10.123.4.5
GCP gives a randomly generated ID to the forwarding rule created for this Load Balancer. If you have multiple forwarding rules, use the following command to figure out which one is the internal load balancer you just created:
# COMMAND:
gcloud compute forwarding-rules list | grep 10.123.4.5
# OUTPUT
NAME                             REGION        IP_ADDRESS   IP_PROTOCOL   TARGET
a26cmodifiedb3f8252484ed9d0192   asia-south1   10.123.4.5   TCP           asia-south1/backendServices/a26cmodified44904b3f8252484ed9d019
NOTE: If you are not working on Linux or grep is not installed, simply run gcloud compute forwarding-rules list and manually look for the forwarding rule with the IP address we are looking for.
Note the name of the forwarding rule and run the following command to update it with --allow-global-access (remember to add beta, as it is still a beta feature):
# COMMAND:
gcloud beta compute forwarding-rules update a26cmodified904b3f8252484ed9d0192 \
--region asia-south1 --allow-global-access
# OUTPUT:
Updated [https://www.googleapis.com/compute/beta/projects/PROJECT/regions/REGION/forwardingRules/a26hehemodifiedhehe490252484ed9d0192].
And it's done. Now you can access this internal IP (10.123.4.5) from any instance in any region (within the same VPC network).
Another possible way is to run an nginx reverse proxy on a Compute Engine instance in the same region as the GKE cluster, and use the internal IP of that instance to communicate with the GKE services.
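A minimal sketch of such a proxy configuration, assuming the internal load balancer answers on 10.123.4.5 (all addresses here are placeholders):
# /etc/nginx/conf.d/gke-proxy.conf on the Compute Engine VM
server {
    listen 80;
    location / {
        # Forward to the internal load balancer in front of the GKE services
        proxy_pass http://10.123.4.5;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Clients in other regions then talk to the VM's internal IP, which is reachable across regions within the same VPC.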
First of all, note that the only way to connect to a GCP resource (in this case your GKE cluster) from an on-premises location is through either a Cloud Interconnect or a VPN setup, and these must be in the same region and VPC to be able to communicate with each other.
Having said that, since you'd rather not do it under the same VPC, a workaround for your scenario could be:
Creating a Service of type LoadBalancer, so your cluster is reachable through an external (public) IP by exposing this service. If you are worried about security, you can use Istio to enforce access policies, for example.
Or, creating an HTTP(S) load balancer with Ingress, so your cluster is reachable through its external (public) IP. Here again, for security purposes you can use GCP Cloud Armor, which so far works only with HTTP(S) Load Balancing.