Connection error between shared VPC and VPC peering - Kubernetes

I have this architecture in GCP.
Network project: 2 VPCs, dev and pro, with two subnets each. Subnet A is in europe-west1 and subnet B is in us-central1, and the pro VPC mirrors this layout. Both VPCs are shared with other projects, and each of those projects has a GKE cluster; so I have one GKE cluster in subnet A of the dev VPC, one GKE cluster in subnet B of the dev VPC, and the same in the pro VPC.
I have another project (project B) with some tools like Grafana, Vault, etc. It has its own VPC with a subnet in europe-west1, which I have connected to the dev VPC by VPC peering. This project has GKE clusters too.
My issue: when I connect from project B to the GKE clusters in the dev VPC, the cluster in subnet A works fine, but the one in subnet B doesn't, and I don't know why.
I have tried creating firewall rules, but it doesn't work, even with rules that allow all traffic on all ports.
Edit:
Now I can ping from the GKE cluster in project B to a pod or node in subnet B, but I can't access an internal load balancer in subnet B (I have the same service in subnet A and it works fine). I'm using firewall rules that allow all inbound/outbound traffic on all ports, but it's still not working.
My architecture is this one:
My issue now is that I can curl (for example) from GKE cluster 5 to an internal load balancer service in GKE clusters 1 and 3, but I can't reach the ones in GKE clusters 2 and 4.
This is my service; it works, because curl from another pod inside the cluster succeeds:
apiVersion: v1
kind: Service
metadata:
  annotations:
    networking.gke.io/load-balancer-type: Internal
  name: prometheus
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: prometheus
    prometheus: prometheus-kube-prometheus-prometheus
  sessionAffinity: None
  type: LoadBalancer
Edit: I uploaded the image from network tests:
Thanks

Set up a VPC Network Peering between the two networks and then set up routes that direct traffic to the appropriate subnet. Additionally, you may need to create firewall rules in both networks to allow the traffic to flow. For example, you could create a firewall rule that allows traffic from project B's subnet to the dev VPC's subnet B, and another that allows traffic from the dev VPC's subnet B to project B's subnet. Finally, you may need to configure the network settings on your GKE clusters to enable the traffic to flow.
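As a minimal sketch with gcloud (the rule names, network names, and subnet CIDR ranges are placeholders; replace them with your actual values):
# Allow traffic from project B's subnet into the dev VPC (run in the network project)
gcloud compute firewall-rules create allow-from-project-b \
    --network=dev-vpc --direction=INGRESS \
    --source-ranges=10.20.0.0/20 --allow=tcp,udp,icmp
# Allow traffic from the dev VPC's subnet B into project B's VPC (run in project B)
gcloud compute firewall-rules create allow-from-dev-subnet-b \
    --network=project-b-vpc --direction=INGRESS \
    --source-ranges=10.10.16.0/20 --allow=tcp,udp,icmp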
Please also check this: Shared VPC is not possible with a personal email account; an organization account is necessary, as it involves IAM. Refer to these VPC peering restrictions.
Refer to this doc, Adding firewall rules for specific use cases, and also to this SO answer on how to open a specific port such as 9090.
As per the official doc, if the dynamic routing mode of the exporting VPC network is regional, then that network exports dynamic routes only within the same region. Only if your network's routing mode is global will GKE 5 be able to interact with GKE 2 and GKE 4, as they are not in the same region as GKE 5.
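For reference, a sketch of how you could check and switch the routing mode with gcloud (the network name dev-vpc is a placeholder):
# Check the current dynamic routing mode
gcloud compute networks describe dev-vpc --format="value(routingConfig.routingMode)"
# Switch to global dynamic routing if it is REGIONAL
gcloud compute networks update dev-vpc --bgp-routing-mode=global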
Please check this doc on Connectivity Tests and try to test the connectivity; this will rule out the possibility of a firewall/routes issue.
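A sketch of running such a test with gcloud (the test name, IPs, and port are placeholders for your actual source and destination; for internal IPs you may also need to specify the source/destination network or project):
gcloud network-management connectivity-tests create gke5-to-ilb-b \
    --source-ip-address=10.30.0.10 \
    --destination-ip-address=10.10.16.5 \
    --destination-port=9090 --protocol=TCP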

Finally I found the solution. The Load Balancer needs this annotation:
networking.gke.io/internal-load-balancer-allow-global-access: "true"
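Applied to the prometheus Service from the question, a sketch looks like this (only the extra annotation is new; everything else is unchanged):
apiVersion: v1
kind: Service
metadata:
  annotations:
    networking.gke.io/load-balancer-type: Internal
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  name: prometheus
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: prometheus
    prometheus: prometheus-kube-prometheus-prometheus
  sessionAffinity: None
  type: LoadBalancer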
I tried it and it works
Thanks to all

Related

Domain Name mapping to K8 service type of load balancer on GKE

I am in the process of learning Kubernetes and creating a sample application on GKE. I am able to create pods, containers, and services on minikube; however, I got stuck when exposing it on the internet using my custom domain, hr.mydomain.com.
My application, say file-process, is running on port 8080, and now I want to expose it to the internet. I tried creating a service of LoadBalancer type on GKE. I get the IP of the load balancer and map it to the A record of hr.mydomain.com.
My question is: if this service is restarted, does the service IP change every time, making the service inaccessible?
How do I manage it? What are the best practices when mapping domain names to a svc?
File service
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
  labels:
    app: file-process-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: file-process-api
Google Kubernetes Engine is designed to take as much configuration hassle out of your hands as possible. Even if you restart the service, nothing will change with regard to its availability from the internet.
Networking (including load balancing) is managed automatically within the GKE cluster:
...Kubernetes uses Services to provide stable IP addresses for applications running within Pods. By default, Pods do not expose an external IP address, because kube-proxy manages all traffic on each node. Pods and their containers can communicate freely, but connections outside the cluster cannot access the Service. For instance, in the previous illustration, clients outside the cluster cannot access the frontend Service using its ClusterIP.
This means that once you expose the service and it gets an external IP, that IP will stay the same until the load balancer is deleted:
The network load balancer is aware of all nodes in your cluster and configures your VPC network's firewall rules to allow connections to the Service from outside the VPC network, using the Service's external IP address. You can assign a static external IP address to the Service.
At this point, when you have a load balancer with a static public IP in front of your service, you can set this IP as an A record for your domain.
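A sketch of how that could look for the Service from the question (the address name and the IP 203.0.113.10 are placeholders; reserve the address first, then reference it in the Service):
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
  labels:
    app: file-process-service
spec:
  type: LoadBalancer
  # IP previously reserved, e.g. with:
  #   gcloud compute addresses create file-process-ip --region=<your-region>
  loadBalancerIP: 203.0.113.10
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: file-process-api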

How can I visit NodePort service in GKE

This is my Service YAML. When I create the svc on GKE, I don't know how to visit it. I can't find an external IP for visiting the svc. How can I visit this svc in the standard flow? Do I need to create an Ingress?
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: ui-svc
  labels:
    targetEnv: dev
    app: ui-svc
spec:
  selector:
    app: ui
    targetEnv: dev
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
    nodePort: 30080
  type: NodePort
If you don't use a private cluster where nodes don't have public IP addresses, you can access your NodePort services using any node's public IP address.
What you can see in the Services & Ingress section in the Endpoints column is the internal cluster IP address of your NodePort service.
If you want to know the public IP addresses of your GKE nodes, go to Compute Engine > VM instances:
You will see the list of all your Compute Engine VMs, which also includes your GKE nodes. Note the IP address in the External IP column. You should use it along with the port number, which you can check in your NodePort service details. Simply click on its name, "ui-svc", to see the details. At the very bottom of the page you should see a ports section which may look as follows:
So in my case I should use <any_node's_public_ip_address>:31251.
One more important thing: don't forget to allow traffic to this port in the firewall, as by default it is blocked. You need to explicitly allow traffic to your nodes, e.g. on port 31251, to be able to access it from the public internet. Simply go to VPC Network > Firewall and set the appropriate rule:
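Or, as a sketch with gcloud (the rule name, network, port, and NODE_TAG are placeholders; replace NODE_TAG with your nodes' network tag, which you can check on the VM instance details page):
gcloud compute firewall-rules create allow-nodeport-31251 \
    --network=default --direction=INGRESS \
    --allow=tcp:31251 --source-ranges=0.0.0.0/0 \
    --target-tags=NODE_TAG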
UPDATE:
If you created an Autopilot Cluster, by default it is a public one, which means its nodes have public IP addresses:
If during the cluster creation you've selected the second option, i.e. "Private cluster", your nodes won't have public IPs by design and you won't be able to access your NodePort service on any public IP. So the only option that remains in such a scenario is exposing your workload via a LoadBalancer service or an Ingress, where a single public IP endpoint is created for you, so you can access your workload externally.
However, if you've chosen the default option, i.e. "Public cluster", you can use your nodes' public IPs to access your NodePort service in the very same way as if you used a Standard (non-Autopilot) cluster.
Of course, in Autopilot mode you won't see your nodes as Compute Engine VMs in your GCP console, but you can still get their public IPs by running:
kubectl get nodes -o wide
They will be shown in EXTERNAL-IP column.
To connect to your cluster simply click on 3 dots you can see to the right of the cluster name ("Kubernetes Engine" > "Clusters") > click "Connect" > click "RUN IN CLOUD SHELL".
Since you don't know what network tags (if any) have been assigned to your GKE Autopilot nodes, as you don't manage them and they are not shown in your GCP console, you won't be able to specify network tags when defining a firewall rule to allow access to your NodePort service port, e.g. 30543, and you would have to choose the option "All instances in the network" instead:
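With gcloud this simply means omitting the target tags, which applies the rule to all instances in the network (the rule name, network, and port are placeholders):
gcloud compute firewall-rules create allow-nodeport-30543 \
    --network=default --direction=INGRESS \
    --allow=tcp:30543 --source-ranges=0.0.0.0/0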

How to get client ip from Google Network Load Balancer with kubernetes service

I created a Kubernetes service in GKE with type: LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: nginx
It's an nginx service and I'm trying to get the original client IP, like:
location / {
    echo $remote_addr;
    echo $http_x_forwarded_for;
}
But the result I get is:
10.140.0.97
$remote_addr looks like an internal Kubernetes IP.
$http_x_forwarded_for is empty.
I don't know why this doesn't match what the documentation says.
What I read
https://cloud.google.com/load-balancing/docs/network
Network Load Balancing is a pass-through load balancer, which means that your firewall rules must allow traffic from the client source IP addresses.
https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#ext-lb
If your Service needs to be reachable from outside the cluster and outside your VPC network, you can configure your Service as a LoadBalancer, by setting the Service's type field to LoadBalancer when defining the Service. GKE then provisions a Network Load Balancer in front of the Service. The Network Load Balancer is aware of all nodes in your cluster and configures your VPC network's firewall rules to allow connections to the Service from outside the VPC network, using the Service's external IP address. You can assign a static external IP address to the Service. Visit Configuring Domain Names with Static IP Addresses for more information.
Just add externalTrafficPolicy: Local
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
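Applied to the nginx Service from the question, a sketch would be (only externalTrafficPolicy is new):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: nginx
Note that with externalTrafficPolicy: Local, only nodes that actually run an nginx pod pass the load balancer health checks and receive traffic, which is what preserves the client source IP.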
Packets sent to Services with Type=LoadBalancer are source NAT’d by default, because all schedulable Kubernetes nodes in the Ready state are eligible for load-balanced traffic. So if packets arrive at a node without an endpoint, the system proxies it to a node with an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).
Reference
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
https://github.com/kubernetes/kubernetes/issues/10921
That's because the Network Load Balancer handles all incoming traffic and redirects it to your GKE cluster. Inside the k8s cluster, everything runs on a virtual IP network, so you get 10.140.0.97.
The 1st document says you need to set up a firewall to accept traffic from client source IPs; otherwise, by GCP default, you are not going to get any incoming traffic. But the 2nd document indicates that GKE will set this up automatically for you. All you need to do is find out your external IP and give it a try. You should be able to see your nginx welcome page.
P.S. The default external IP is dynamic; if you want a static IP, you can get one via the console or the gcloud CLI.
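For example, a quick way to find the external IP and test it (the service name nginx comes from the question; 203.0.113.20 is a placeholder for your external IP):
# The EXTERNAL-IP column shows the load balancer IP once it is provisioned
kubectl get service nginx
curl http://203.0.113.20/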

Accessing GCP Internal Load Balancer from another region

I need to access an internal application running behind a GKE Nginx Ingress service exposed through an Internal Load Balancer, from another GCP region.
I am fully aware that it is not possible using direct Google networking and it is a huge limitation (GCP Feature Request).
The Internal Load Balancer can be accessed perfectly well via a VPN tunnel from AWS, but I am not sure that creating such a tunnel between GCP regions under the same network is a good idea.
Workarounds are welcomed!
In the release notes from GCP, it is stated that:
Global access is an optional parameter for internal LoadBalancer Services that allows clients from any region in your VPC to access the internal TCP/UDP Load Balancer IP address.
Global access is enabled per-Service using the following annotation:
networking.gke.io/internal-load-balancer-allow-global-access: "true".
UPDATE: The service below works for GKE v1.16.x and newer versions:
apiVersion: v1
kind: Service
metadata:
  name: ilb-global
  annotations:
    # Required to assign internal IP address
    cloud.google.com/load-balancer-type: "Internal"
    # Required to enable global access
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
For GKE v1.15.x and older versions:
Accessing the internal load balancer IP from a VM sitting in a different region will not work by default. But the following helped me to make the internal load balancer global.
As we know, an internal load balancer is nothing but a forwarding rule, so we can use a gcloud command to enable global access.
First, get the internal IP address of the Load Balancer using kubectl and note its IP as below:
# COMMAND:
kubectl get services/ilb-global
# OUTPUT:
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
ilb-global   LoadBalancer   10.0.12.12   10.123.4.5    80:32400/TCP   18m
Note the value of "EXTERNAL-IP" or simply run the below command to make it even simpler:
# COMMAND:
kubectl get service/ilb-global \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# OUTPUT:
10.123.4.5
GCP gives a randomly generated ID to the forwarding rule created for this Load Balancer. If you have multiple forwarding rules, use the following command to figure out which one is the internal load balancer you just created:
# COMMAND:
gcloud compute forwarding-rules list | grep 10.123.4.5
# OUTPUT
NAME                             REGION        IP_ADDRESS   IP_PROTOCOL   TARGET
a26cmodifiedb3f8252484ed9d0192   asia-south1   10.123.4.5   TCP           asia-south1/backendServices/a26cmodified44904b3f8252484ed9d019
NOTE: If you are not working on Linux or grep is not installed, simply run gcloud compute forwarding-rules list and manually look for the forwarding rule with the IP address we are looking for.
Note the name of the forwarding rule and run the following command to update it with --allow-global-access (remember to add beta, as it is still a beta feature):
# COMMAND:
gcloud beta compute forwarding-rules update a26cmodified904b3f8252484ed9d0192 \
--region asia-south1 --allow-global-access
# OUTPUT:
Updated [https://www.googleapis.com/compute/beta/projects/PROJECT/regions/REGION/forwardingRules/a26hehemodifiedhehe490252484ed9d0192].
And it's done. Now you can access this internal IP (10.123.4.5) from any instance in any region (within the same VPC network).
Another possible way is to run an nginx reverse proxy server on a Compute Engine instance in the same region as the GKE cluster, and use the internal IP of that instance to communicate with the services in GKE.
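A sketch of what that proxy configuration might look like on the Compute Engine instance (the internal load balancer IP and port are placeholders):
server {
    listen 80;
    location / {
        # Forward everything to the GKE internal load balancer in the same region
        proxy_pass http://10.123.4.5:80;
    }
}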
First of all, note that the only way to connect to a GCP resource (in this case your GKE cluster) from an on-premises location is either through a Cloud Interconnect or a VPN setup, and these must be in the same region and VPC to be able to communicate with each other.
Having said that, I see you'd rather not do that within the same VPC, therefore a workaround for your scenario could be:
Creating a Service of type LoadBalancer, so your cluster can be reached through an external (public) IP by exposing this service. If you are worried about security, you can use Istio to enforce access policies, for example.
Or, creating an HTTP(S) load balancer with Ingress, so your cluster can be reached through its external (public) IP. Again, for security purposes, you can use GCP Cloud Armor, which so far works only for HTTP(S) Load Balancing.

Wrong IP from GCP kubernetes load balancer to app engine's service

I'm having some trouble with an nginx pod inside a Kubernetes cluster located on GCP, which should be able to access a service located on App Engine.
I have set firewall rules in App Engine to deny all and only allow some IPs, but the IP which hits my App Engine service isn't the IP of my nginx load balancer but instead the IP of one of the nodes of the cluster.
An image is better than 1000 words, so here's an image of our architecture:
The problem is: the IP which hits App Engine's firewall is IP A, whereas I thought it'd be IP B. IP A changes every time I kill/create the cluster. If it were IP B, I could easily allow this IP in App Engine's firewall rules, as I've made it static. Does anyone have an idea how to present IP B instead of IP A?
Thanks
The IP address assigned to your nginx "load balancer" is (likely) not an IP owned or managed by your Kubernetes cluster. Services of type LoadBalancer in GKE use Google Cloud Load Balancers. These are an external abstraction which terminates inbound connections in Google's front-end infrastructure and passes traffic to the individual k8s nodes in the cluster for onward delivery to your k8s-hosted service.
Pods in a Kubernetes cluster will, by default, route egress traffic out of the cluster using the configuration of their host node. In GKE, this route corresponds to the gateway of the VPC in which the cluster (and, by extension, Compute Engine instances) exists. The public IP of cluster nodes will change as they are added and removed from the pool.
A workaround uses a dedicated instance with a static external IP to process egress traffic leaving your VPC (i.e. egress from your cluster). Google has a tutorial for this purpose here: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
There are k8s-native solutions, but these will be unsuitable in a GKE context at present due to the inability to maintain any node with a non-ephemeral public IP.