Kubernetes pod to GCP Compute Engine VM communication

How can I connect to VMs running in GCP Compute Engine from a Kubernetes pod? I have set up a proxy server on a Compute Engine VM and I need to use it from within pods.
This communication needs to use internal IPs. I have added firewall rules to allow all internal traffic.
Any suggestions on how to connect from pods to GCP VMs?

You can create an internal load balancer in GCP and connect to the VM through it, or you can use VPC peering if the VM is in a different network.
If your GKE cluster and the VM are in the same VPC network, you can connect to the VM using its internal IP.
From inside a pod you can send curl requests to the VM over its internal IP, as shown in the sketch below.
OR
If your GKE cluster and the GCP VM are in different networks, you can use VPC peering to connect the two networks and then use the VM's internal IP from the pod.
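A minimal sketch of both options, assuming a hypothetical VM internal IP of 10.128.0.10, a proxy listening on port 3128, and example network/project names:

```sh
# Same network: test connectivity from a pod to the VM's internal IP.
# 10.128.0.10 and port 3128 are placeholders for your proxy VM.
kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never -- \
  curl -v -x http://10.128.0.10:3128 http://example.com

# Different networks: peer the two VPCs (create the peering in both
# directions), then use the internal IP as above. Network and project
# names here are examples.
gcloud compute networks peerings create gke-to-vm \
  --network=gke-network --peer-network=vm-network --peer-project=my-project
gcloud compute networks peerings create vm-to-gke \
  --network=vm-network --peer-network=gke-network --peer-project=my-project
```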

Related

How to do client-side load balancing of a GRPC service hosted inside a Kubernetes cluster from outside a Kubernetes cluster?

Scenario: We have a client outside a K8s cluster trying to access a GRPC service hosted inside a K8s cluster. Both the client and the service are part of the same VNET in Azure. We would like to use client-side load balancing for accessing this GRPC service.
Setup of our K8s cluster: Our K8s cluster is hosted inside an Azure VNET and uses the Azure CNI networking model, so the pods in our cluster get IP addresses from the VNET's IP address space. Please note we are not using AKS and are self-hosting the K8s cluster, but in my opinion this question should not depend on that.
Questions:
We would like to use client-side load balancing for accessing this GRPC service. If both our client and server were inside the K8s cluster, we could have used a K8s headless service to get the list of pod IP addresses. But since the client is outside the K8s cluster, how can it retrieve those IP addresses?
Can the K8s cluster create DNS records in a DNS server hosted outside the cluster, so that the external client can get the list of IP addresses from it?
Thanks for your help!
I found that I could solve the issue by using ExternalDNS. After wiring up my cluster with ExternalDNS (linked to an Azure Private DNS zone), I created a headless service and found that, on deployment of this service, DNS records were created in the Azure Private DNS zone. I was able to get the list of pod IP addresses just by doing a DNS lookup of the service's DNS name.
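A minimal sketch of such a headless service, assuming ExternalDNS is already deployed with the Azure Private DNS provider; the service name, selector, hostname, and port are placeholders:

```sh
# Headless service (clusterIP: None) whose pod IPs ExternalDNS publishes
# as A records in the private DNS zone. Names and hostname are examples.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: grpc.internal.example.com
spec:
  clusterIP: None          # headless: DNS resolves to individual pod IPs
  selector:
    app: my-grpc-app
  ports:
    - port: 50051
      targetPort: 50051
EOF

# From the external client (same VNET), resolve all pod IPs for
# client-side load balancing:
nslookup grpc.internal.example.com
```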

GCP kubernetes service subnet cannot be accessed via AWS VPN

I have a Kubernetes cluster on GCP with the following network configuration:
instance address range: 10.109.16.0/20
pods address range: 10.18.0.0/16
service address range: 10.84.16.0/20
I also have a site-to-site VPN from AWS, and I want to access the service address range of the Kubernetes cluster from an AWS instance via the VPN. The pod address range is already reachable (tested via ICMP). Since the service addresses only expose specific ports, I tried to curl a specific Kubernetes service port from the AWS instance but got a timeout error.
Why is the pod address range reachable but not the service address range?
As per the GCP documentation [1]: "As with any GKE cluster, Service (ClusterIP) addresses are only available from within the cluster. If you need to access a Kubernetes Service from VM instances outside of the cluster, but within the cluster's VPC network and region, create an internal TCP/UDP load balancer."
If you are creating the service as ClusterIP, it seems this is not possible. Please create a Service of type LoadBalancer (internal), as sketched below, and see if everything works as expected.
[1]https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#restrictions
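A minimal sketch of an internal LoadBalancer Service on GKE; the service name, selector, and ports are placeholders, and the annotation shown is for newer GKE versions (older clusters use cloud.google.com/load-balancer-type: "Internal" instead):

```sh
# Internal TCP/UDP load balancer: exposes the service on an internal IP in
# the VPC, which can then be reached over the VPN (names are examples).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF

# The EXTERNAL-IP column will show an internal VPC address to use from AWS.
kubectl get service my-internal-service
```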

Kubernetes IP conflict

I have the following context: there is a GCP VPC Interconnect to the company network, which hosts a lot of services with IP addresses in the range 10.7.114.0/23. In GCP I also have Kubernetes pods that consume those on-premise 10.7.114.0/23 services.
The Kubernetes range is 10.194.0.0/19, and when I try to access the on-premise 10.7.114.0/23 services from a pod, the traffic is blocked. A traceroute reaches 10.194.0.1 and is blocked after that.
Do you have any idea how to solve this problem?

How to IP whitelist a Kubernetes cluster

I have a Kubernetes cluster running in Google Cloud, and from pods inside the cluster I need to access an external DB which has IP whitelisting configured. It seems that I need a static, shared IP for the cluster's outgoing traffic. What's the best approach?
Setting up a service IP seems irrelevant as that's for inbound traffic. I looked into Cloud NAT and it seems promising, but I'm not exactly sure how to set it up. Any docs/tutorials would be helpful, thanks!
According to the docs, when traffic goes out of a Kubernetes cluster in GKE it gets SNATed with the IP of the node, so you could whitelist the IPs of all the GKE cluster nodes, for example as shown below.
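A quick way to list the node external IPs to whitelist, assuming the nodes have public IPs; the cluster and node-pool names below are placeholders:

```sh
# External IPs of all cluster nodes, as seen by the external DB.
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'

# Equivalent view from the GCP side (cluster/node-pool name is an example).
gcloud compute instances list \
  --filter="name~'gke-my-cluster-default-pool'" \
  --format="value(networkInterfaces[0].accessConfigs[0].natIP)"
```

Keep in mind that node IPs change as nodes are added, removed, or recreated, which is why Cloud NAT is usually the better long-term option.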
Here are some best practices on connecting to external services from a Kubernetes cluster, and an example of connecting to Cloud SQL from Google Kubernetes Engine.
There is also an example setup of Cloud NAT on GKE; a sketch of that approach follows.
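A minimal sketch of a Cloud NAT setup with a reserved static egress IP, assuming the cluster's nodes have no external IPs (e.g. a private cluster) so that Cloud NAT handles their egress; the names and region are placeholders:

```sh
# Reserve a static IP that the external DB can whitelist.
gcloud compute addresses create k8s-egress-ip --region=us-central1

# Cloud Router + Cloud NAT for the cluster's VPC; egress from subnets in
# this region (including GKE nodes/pods) is SNATed to the reserved IP.
gcloud compute routers create nat-router \
  --network=default --region=us-central1
gcloud compute routers nats create nat-config \
  --router=nat-router --region=us-central1 \
  --nat-external-ip-pool=k8s-egress-ip \
  --nat-all-subnet-ip-ranges
```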

Wrong IP from GCP kubernetes load balancer to app engine's service

I'm having some trouble with an nginx pod inside a Kubernetes cluster on GCP that should be able to access a service hosted on App Engine.
I have set firewall rules in App Engine to deny all and only allow some IPs, but the IP that hits my App Engine service isn't the IP of my Nginx load balancer; instead it is the IP of one of the cluster's nodes.
An image is better than 1000 words, so here's a diagram of our architecture:
The problem is that the IP that hits App Engine's firewall is IP A, whereas I thought it would be IP B. IP A changes every time I delete and recreate the cluster. If it were IP B, I could easily allow this IP in App Engine's firewall rules, since I've made it static. Does anyone have an idea how to get IP B instead of IP A?
Thanks
The IP address assigned to your nginx "load balancer" is (likely) not an IP owned or managed by your Kubernetes cluster. Services of type LoadBalancer in GKE use Google Cloud Load Balancers. These are an external abstraction which terminates inbound connections in Google's front-end infrastructure and passes traffic to the individual k8s nodes in the cluster for onward delivery to your k8s-hosted service.
Pods in a Kubernetes cluster will, by default, route egress traffic out of the cluster using the configuration of their host node. In GKE, this route corresponds to the gateway of the VPC in which the cluster (and, by extension, Compute Engine instances) exists. The public IP of cluster nodes will change as they are added and removed from the pool.
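One way to see this in practice, assuming outbound internet access from the cluster and using the third-party ifconfig.me service as an example, is to check which public IP a pod's egress traffic actually uses; it will match a node's external IP rather than the load balancer's IP:

```sh
# The IP printed here is the source IP App Engine sees (i.e. "IP A"),
# which belongs to whichever node the pod is scheduled on.
kubectl run egress-check --image=curlimages/curl --rm -it --restart=Never -- \
  curl -s https://ifconfig.me

# Compare with the nodes' external IPs.
kubectl get nodes -o wide
```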
A workaround uses a dedicated instance with a static external IP to process egress traffic leaving your VPC (i.e. egress from your cluster). Google has a tutorial for this purpose here: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
There are k8s-native solutions, but these will be unsuitable in a GKE context at present due to the inability to maintain any node with a non-ephemeral public IP.