Kubernetes IP conflict

I have the following context: there is a GCP VPC interconnect to the company network, where a lot of services live in the 10.7.114.0/23 range. In GCP I also have Kubernetes pods that consume those on-premise 10.7.114.0/23 services.
The Kubernetes range is 10.194.0.0/19, and when I try to access the on-premise 10.7.114.0/23 services from a pod, the traffic is blocked. A traceroute reaches 10.194.0.1 and stops there.
Do you have any idea how to solve this problem?

Related

How to do client-side load balancing of a GRPC service hosted inside a Kubernetes cluster from outside a Kubernetes cluster?

Scenario: We have a client outside a K8s cluster trying to access a GRPC service hosted inside a K8s cluster. Both the client and the service are part of the same VNET in Azure. We would like to use client-side load balancing for accessing this GRPC service.
Setup of our K8s cluster: our K8s cluster is hosted inside an Azure VNET and uses the Azure CNI networking model, which means the pods in our cluster get IP addresses from the VNET's address space. Please note we are not using AKS and are self-hosting the K8s cluster, but in my opinion this question should not depend on that.
Questions:
We would like to use client-side load balancing for accessing this GRPC service. If both our client and server were inside the K8s cluster, we could have used a K8s headless service to get the list of pod IP addresses. But since the client is outside the K8s cluster, how can we retrieve those IP addresses from outside the cluster?
Can the K8s cluster create DNS records in a DNS server hosted outside the cluster, so that the external client can look up the list of IP addresses from it?
Thanks for your help!
I found that I could solve the issue by using ExternalDNS. After wiring up my cluster with ExternalDNS (linked to an Azure Private DNS zone), I created a headless service and found that on deployment of this service, DNS records were created in the Azure Private DNS zone. I was able to get the list of pod IP addresses by simply doing a DNS lookup of the service's DNS name.
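For illustration, a minimal sketch of such a headless Service (the service name, hostname, selector, and port below are placeholders; it assumes ExternalDNS is already deployed with the Azure Private DNS provider and is watching Services, and depending on the ExternalDNS version extra flags may be needed for headless services):

    # Headless Service whose pod IPs ExternalDNS publishes as A records.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: grpc-backend
      annotations:
        external-dns.alpha.kubernetes.io/hostname: grpc-backend.example.private
    spec:
      clusterIP: None            # headless: DNS resolves directly to pod IPs
      selector:
        app: grpc-backend
      ports:
        - name: grpc
          port: 50051
          targetPort: 50051
    EOF

    # From the external client, resolve all pod IPs for client-side load balancing:
    nslookup grpc-backend.example.private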

GCP kubernetes service subnet cannot be access via AWS VPN

I have a Kubernetes cluster on GCP with the following network configuration:
instance address range: 10.109.16.0/20,
pods address range: 10.18.0.0/16,
service address range: 10.84.16.0/20
I also have a site-to-site VPN from AWS, and I want to reach the service address range of the cluster from an AWS instance over the VPN. The pod address range is already reachable (tested via ICMP). Since the service addresses only expose specific ports, I tried to curl a Kubernetes service on its port from the AWS instance, but got a timeout error.
So why is the pod address range reachable but not the service address range?
As per the GCP documentation [1]: "As with any GKE cluster, Service (ClusterIP) addresses are only available from within the cluster. If you need to access a Kubernetes Service from VM instances outside of the cluster, but within the cluster's VPC network and region, create an internal TCP/UDP load balancer."
If you are creating the Service as a ClusterIP, it seems this is not possible. Create a Service of type LoadBalancer (an internal load balancer) instead and see if everything works as expected.
[1]https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#restrictions
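For illustration, a rough sketch of such an internal load balancer Service (the name, selector, and port are placeholders; the annotation key depends on the GKE version):

    # Internal TCP/UDP load balancer Service on GKE. On older clusters the
    # annotation is cloud.google.com/load-balancer-type: "Internal" instead.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-internal
      annotations:
        networking.gke.io/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
        - port: 8080
          targetPort: 8080
    EOF

The Service then gets an internal IP from the VPC subnet rather than from the cluster's service range, which can typically be reached over the VPN or Interconnect (subject to region and global-access settings).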

GKE cluster egress traffic coming out the nodes rather than the LB service

I'm new to GKE and K8S so please bear with me and my silliness. I currently have a GKE cluster with two nodes in the default node pool, and the cluster is exposed via a LoadBalancer-type service.
These nodes are tasked with calling a Compute Engine instance via HTTP. I have a firewall rule set in GCP to deny ingress traffic to the GCE instance except for traffic coming from the GKE cluster.
The issue is that the traffic isn't coming from the LoadBalancer service's IP but rather from the nodes themselves, so whitelisting the service's IP has no effect and I have to whitelist the IPs of the nodes instead. This is not ideal, since each time a new node is created I have to change the firewall rule. I understood that once you have a service set up in the cluster, all traffic would be directed through the IP of the service, so why is this happening? What am I doing wrong? Please let me know if you need more details, and thanks in advance.
YAML of the service:
https://i.stack.imgur.com/XBZmE.png
When you create a Service on GKE and expose it to the internet, a load balancer is created. This load balancer manages only ingress traffic (traffic from the internet to your GKE cluster).
When your pod initiates a connection, the traffic is not handled by the load balancer but by the node that hosts the pod, if the node has a public IP. (Instead of denying traffic to the GCE instance, simply remove the public IP; it's easier and safer!)
If you want to control the IP used for egress traffic originating from your pods, you have to set up Cloud NAT for your GKE cluster.
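For illustration, a minimal Cloud NAT sketch with gcloud (the names, region, and network are placeholders; note that Cloud NAT is only used by nodes without external IPs, e.g. a private GKE cluster):

    # Reserve a static egress IP, create a Cloud Router, and attach a NAT config.
    gcloud compute addresses create nat-egress-ip --region=us-central1

    gcloud compute routers create nat-router \
        --network=default --region=us-central1

    gcloud compute routers nats create nat-config \
        --router=nat-router --region=us-central1 \
        --nat-external-ip-pool=nat-egress-ip \
        --nat-all-subnet-ip-ranges

Internet-bound egress from the pods then appears to come from nat-egress-ip, a single stable address that can be whitelisted.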

How to IP whitelist a Kubernetes cluster

So I have a Kubernetes cluster running in Google Cloud, and from pods inside the cluster I need to access an external DB which has IP whitelisting configured. It seems that I need a static, shared IP for the cluster's outgoing traffic; what's the best approach?
Setting up a service IP seems irrelevant as that's for inbound traffic. I looked into Cloud NAT and it seems promising, but I'm not exactly sure about how to set that up. Any docs/tutorial would be helpful, thanks!
According to the docs, when traffic goes out of a Kubernetes cluster in GKE it gets SNATed to the IP of the node, so you could whitelist the IPs of all the GKE cluster's nodes.
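For reference, one way to list those node IPs (assuming the nodes have external IPs assigned):

    # Print the external IP of every node in the cluster, one per line.
    kubectl get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'

Since these IPs change whenever nodes are recreated or the pool scales, the Cloud NAT approach mentioned in the question (and linked below) is the usual way to get a single static egress IP.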
Here are some best practices on connecting to external services from a Kubernetes cluster, and an example of connecting to Cloud SQL from Google Kubernetes Engine.
There is also an example setup of Cloud NAT on GKE.

Wrong IP from GCP kubernetes load balancer to app engine's service

I'm having some trouble with an nginx pod inside a Kubernetes cluster located on GCP, which should be able to access a service located on App Engine.
I have set firewall rules in App Engine to deny all and only allow some IPs, but the IP which hits my App Engine service isn't the IP of my nginx load balancer; it is instead the IP of one of the nodes of the cluster.
An image is better than a thousand words, so here's a diagram of our architecture (image not reproduced here):
The problem is: the IP which hits App Engine's firewall is IP A (a node IP), whereas I thought it would be IP B (the load balancer's IP). IP A changes every time I delete/recreate the cluster. If it were IP B, I could easily allow this IP in App Engine's firewall rules, as I've made it static. Does anyone have an idea how to get IP B instead of IP A?
Thanks
The IP address assigned to your nginx "load balancer" is (likely) not an IP owned or managed by your Kubernetes cluster. Services of type LoadBalancer in GKE use Google Cloud Load Balancers. These are an external abstraction which terminates inbound connections in Google's front-end infrastructure and passes traffic to the individual k8s nodes in the cluster for onward delivery to your k8s-hosted service.
Pods in a Kubernetes cluster will, by default, route egress traffic out of the cluster using the configuration of their host node. In GKE, this route corresponds to the gateway of the VPC in which the cluster (and, by extension, Compute Engine instances) exists. The public IP of cluster nodes will change as they are added and removed from the pool.
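A quick way to see which source IP egress traffic actually uses (the throwaway pod and the echo endpoint below are just illustrative, and it assumes the nodes have outbound internet access):

    # Run a one-off pod and ask an external echo service which source IP it sees;
    # on a cluster with public nodes this will be the hosting node's external IP.
    kubectl run egress-check --rm -it --restart=Never \
        --image=curlimages/curl --command -- curl -s https://ifconfig.me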
A workaround uses a dedicated instance with a static external IP to process egress traffic leaving your VPC (i.e. egress from your cluster). Google has a tutorial for this purpose here: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
There are k8s-native solutions, but these will be unsuitable in a GKE context at present due to the inability to maintain any node with a non-ephemeral public IP.