Peering Connection between GKE and EKS

Is it possible to connect a GKE cluster to a VPC within AWS? For this specific use case, I want the GKE cluster to be able to talk with the EKS cluster behind a VPC in AWS.
I have the CIDR block for my GKE cluster: `gcloud container clusters describe _cluster_name_ | grep clusterIpv4Cidr`
I've already created a VPC and a cluster in AWS (i.e., I have a VPC ID for my AWS VPC).
Do I need to create a VPC for my GKE cluster in addition to the VPC for my EKS cluster, or do I just need the CIDR range of the GKE cluster on the AWS side?
Googling turns up very few results for connecting clusters from different providers.

It's possible with a VPN connection. First, have a look at the Kubernetes Engine Communication Through VPN demo, and then move on to the example closest to your case: a site-to-site VPN between GCP and AWS. In addition, check the Google Cloud Router documentation and examples for extra information about networking in GKE.
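For a rough idea of the GCP half, here is a minimal Classic VPN sketch. It assumes the GKE cluster sits in the default network in us-central1, and every name, IP, secret, and CIDR below is a placeholder; the AWS half (a Virtual Private Gateway or VPN endpoint, plus routes back to the GKE CIDR) is configured separately on the AWS side.

```bash
REGION=us-central1
AWS_VPN_IP=203.0.113.10        # placeholder: public IP of the AWS VPN endpoint
EKS_VPC_CIDR=172.31.0.0/16     # placeholder: CIDR of the AWS VPC
SHARED_SECRET=changeme         # placeholder: IKE pre-shared key

# Static external IP and Classic VPN gateway on the GCP side.
gcloud compute addresses create gke-vpn-ip --region "$REGION"
GCP_VPN_IP=$(gcloud compute addresses describe gke-vpn-ip \
  --region "$REGION" --format='value(address)')
gcloud compute target-vpn-gateways create gke-vpn-gw \
  --network default --region "$REGION"

# Forwarding rules IPsec needs (ESP, IKE on UDP/500, NAT-T on UDP/4500).
gcloud compute forwarding-rules create gke-vpn-esp --region "$REGION" \
  --ip-protocol ESP --address "$GCP_VPN_IP" --target-vpn-gateway gke-vpn-gw
gcloud compute forwarding-rules create gke-vpn-udp500 --region "$REGION" \
  --ip-protocol UDP --ports 500 --address "$GCP_VPN_IP" \
  --target-vpn-gateway gke-vpn-gw
gcloud compute forwarding-rules create gke-vpn-udp4500 --region "$REGION" \
  --ip-protocol UDP --ports 4500 --address "$GCP_VPN_IP" \
  --target-vpn-gateway gke-vpn-gw

# The tunnel itself, plus a route sending EKS-bound traffic into it.
gcloud compute vpn-tunnels create tunnel-to-aws --region "$REGION" \
  --peer-address "$AWS_VPN_IP" --shared-secret "$SHARED_SECRET" \
  --ike-version 2 --target-vpn-gateway gke-vpn-gw \
  --local-traffic-selector 0.0.0.0/0 \
  --remote-traffic-selector "$EKS_VPC_CIDR"
gcloud compute routes create route-to-eks --network default \
  --destination-range "$EKS_VPC_CIDR" \
  --next-hop-vpn-tunnel tunnel-to-aws \
  --next-hop-vpn-tunnel-region "$REGION"
```

The clusterIpv4Cidr you grepped above is what the AWS side then needs in its route tables and security groups. There is no extra VPC to create on the GCP side: the GKE cluster already lives in a VPC network.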

Related

How can I block some public IPs from accessing my Kubernetes cluster

I have deployed my Kubernetes cluster on AWS EKS and am using an ingress gateway to block IPs from accessing certain services. Is there a way to block those public IPs from inside the cluster itself (say, using the ingress gateway)? If not, is there a way to whitelist certain IPs from inside the cluster?
I'm already aware that an AWS security group can do this, but I want to implement it from inside the cluster.
You can try using Calico on EKS.
Network policies are similar to AWS security groups in that you can create network ingress and egress rules. Instead of assigning instances to a security group, you assign network policies to pods using pod selectors and labels.
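A minimal sketch of what that looks like, assuming Calico is installed as the policy engine on the EKS cluster; the namespace, the app=web label, the CIDR, and the port are all hypothetical:

```bash
# Standard Kubernetes NetworkPolicy that only admits a whitelisted range
# to pods labeled app=web; Calico enforces it.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: whitelist-office-ips
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 198.51.100.0/24   # allowed source range (placeholder)
      ports:
        - protocol: TCP
          port: 80
EOF
```

Note that matching on external client IPs only works if the original source IP survives the hop into the cluster (e.g. externalTrafficPolicy: Local on the exposing Service); otherwise the policy sees the load balancer's or node's address instead.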

How to do client-side load balancing of a gRPC service hosted inside a Kubernetes cluster, from a client outside the cluster?

Scenario: We have a client outside a K8s cluster trying to access a gRPC service hosted inside a K8s cluster. Both the client and the service are part of the same VNET in Azure. We would like to use client-side load balancing to access this gRPC service.
Setup of our K8s cluster: Our K8s cluster is hosted inside an Azure VNET and uses the Azure CNI networking model, so the pods in our cluster have IP addresses from the VNET's IP address space. Please note we are not using AKS and are self-hosting the K8s cluster, but in my opinion this question should not depend on that.
Questions:
We would like to use client-side load balancing to access this gRPC service. If both our client and server were inside the K8s cluster, we could have used a K8s headless service to get the list of IP addresses. But since the client is outside the K8s cluster, how can we retrieve the pod IP addresses from outside the cluster?
Can the K8s cluster create DNS records in a DNS server hosted outside the cluster, so that the external client can get the list of IP addresses from it?
Thanks for your help!
I found that I could solve the issue by using ExternalDNS. After wiring up my cluster with ExternalDNS (linked to an Azure Private DNS zone), I created a headless service and found that, on deployment of this service, DNS records were created in the Azure Private DNS zone. I was able to get the list of pod IP addresses by simply doing a DNS lookup of the service's DNS name.
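For reference, a sketch of the headless Service described above; the service name, hostname annotation, zone, and port are hypothetical, and it assumes ExternalDNS is already deployed with the Azure Private DNS provider:

```bash
# Headless Service: clusterIP None makes DNS resolve to pod IPs directly.
# ExternalDNS picks up the hostname annotation and publishes one A record
# per pod IP into the private zone.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: grpc-backend
  annotations:
    external-dns.alpha.kubernetes.io/hostname: grpc-backend.internal.example.com
spec:
  clusterIP: None
  selector:
    app: grpc-backend
  ports:
    - port: 50051
      protocol: TCP
EOF

# From the external client (same VNET), resolve all pod IPs at once:
nslookup grpc-backend.internal.example.com
```

The client can then feed that list of addresses into its gRPC client-side load-balancing policy.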

Kubernetes pod to GCP Compute Engine VM communication

How can I connect to VMs running in GCP Compute Engine from a Kubernetes pod? I have set up a proxy server in Compute Engine and I need to use it from within pods.
This communication needs to use internal IPs. I have added firewall rules to allow all internal IP traffic.
Any suggestions on how to connect from pods to GCP VMs?
You can create an internal load balancer in GCP in front of the VM, or use VPC peering if the VM is in a different network.
If your GKE cluster and the VM are in the same network, you can connect using the internal IP of the VM: from inside a pod you can send curl requests to the VM over its internal IP.
OR
If your GKE cluster and the GCP VM are in different networks, you can use VPC peering to connect the two networks and then use the internal IP of the VM from the pod.
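A quick way to test, plus the peering commands for the different-networks case; the VM's internal IP (10.128.0.5), the proxy port (3128), and the network names are placeholders:

```bash
# One-off curl from a throwaway pod, pointed at the proxy on the VM.
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -x http://10.128.0.5:3128 http://example.com

# Different networks: peer them (peering must be created from both sides).
gcloud compute networks peerings create gke-to-vm \
  --network gke-network --peer-network vm-network
gcloud compute networks peerings create vm-to-gke \
  --network vm-network --peer-network gke-network
```

Remember that VPC peering is non-transitive and you still need firewall rules allowing the GKE node/pod ranges to reach the VM's port.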

Kubernetes IP conflict

I have the following context: there is a GCP VPC Interconnect to the company network, where a lot of services live in the range 10.7.114.0/23. In GCP I also have Kubernetes pods consuming those on-premise 10.7.114.0/23 services.
The Kubernetes range is 10.194.0.0/19, and when I try to access the on-premise 10.7.114.0/23 services from a pod, the traffic is blocked. A traceroute reaches 10.194.0.1 and is blocked after that hop.
Do you have any idea how to solve this problem?
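Two checks that usually narrow this kind of thing down; the router name and region below are placeholders, and this assumes a Cloud Router manages the Interconnect's route advertisements:

```bash
# 1) Is the pod range 10.194.0.0/19 advertised to on-prem over the
#    Interconnect? Inspect the Cloud Router's BGP/advertisement config.
gcloud compute routers describe my-interconnect-router --region us-central1

# 2) Is pod traffic to 10.7.114.0/23 SNATed to node IPs? If the on-prem
#    range is missing from nonMasqueradeCIDRs, on-prem firewalls see node
#    IPs rather than pod IPs (this configmap exists only if the agent runs).
kubectl -n kube-system get configmap ip-masq-agent -o yaml
```

If 10.194.0.0/19 is never advertised to on-prem, replies from 10.7.114.0/23 have no route back; if pod traffic is masqueraded, the on-prem filters must allow the node range instead of the pod range.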

How to IP whitelist a Kubernetes cluster

So I have a Kubernetes cluster running in Google Cloud, and from pods inside the cluster I need to access an external DB which has IP whitelisting configured. It seems that I need a static, shared IP for the cluster's outgoing traffic; what's the best approach?
Setting up a service IP seems irrelevant, as that's for inbound traffic. I looked into Cloud NAT and it seems promising, but I'm not exactly sure how to set it up. Any docs/tutorials would be helpful, thanks!
According to the docs, when traffic leaves a Kubernetes cluster in GKE it gets SNATed with the IP of the node, so you could whitelist the IPs of all the cluster's nodes.
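A quick way to enumerate those node IPs (assuming the nodes have external IPs at all, which a private cluster's won't):

```bash
kubectl get nodes \
  -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
```

Node IPs change with autoscaling, upgrades, and node replacement, though, which is why the Cloud NAT approach below is more robust.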
Here are some best practices on connecting to external services from a Kubernetes cluster, and an example of connecting to Cloud SQL from Google Kubernetes Engine.
There is also an example setup of Cloud NAT on GKE.
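A minimal Cloud NAT sketch for getting one stable egress IP (resource names and region are placeholders): reserve a static address, create a Cloud Router, and attach a NAT config that uses it.

```bash
# Static external IP the DB will whitelist.
gcloud compute addresses create nat-egress-ip --region us-central1

# Cloud Router in the cluster's network, and a NAT using the reserved IP.
gcloud compute routers create nat-router \
  --network default --region us-central1
gcloud compute routers nats create nat-config \
  --router nat-router --region us-central1 \
  --nat-external-ip-pool nat-egress-ip \
  --nat-all-subnet-ip-ranges
```

The DB whitelist then only needs the single nat-egress-ip address. Note that Cloud NAT only handles instances without external IPs, so this pairs with a private GKE cluster.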