GCP Kubernetes service subnet cannot be accessed via AWS VPN - kubernetes

I have a Kubernetes cluster on GCP with the following network configuration:
instance address range: 10.109.16.0/20,
pods address range: 10.18.0.0/16,
service address range: 10.84.16.0/20
I also have a site-to-site VPN from AWS, and I want to reach the service address range of the cluster from an AWS instance over the VPN. The pod address range is already reachable, which I verified via ICMP. Since the services only expose specific ports, I tried to curl a Kubernetes service on its port from the AWS instance, but I get a timeout error.
So why is the pod address range reachable but not the service address range?

As per GCP documentation[1],"As with any GKE cluster, Service (ClusterIP) addresses are only available from within the cluster. If you need to access a Kubernetes Service from VM instances outside of the cluster, but within the cluster's VPC network and region, create an internal TCP/UDP load balancer."
If you are creating the Service as a ClusterIP, it seems this is not possible. Please create a Service of type LoadBalancer (an internal TCP/UDP load balancer) instead and see if everything works as expected.
[1] https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#restrictions
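A minimal sketch of such a Service, assuming a workload labelled app: my-app and a GKE version that honours the internal load balancer annotation (the names, labels, and ports below are placeholders):

```yaml
# Hypothetical Service exposing the pods behind "my-app" through a GCP
# internal TCP/UDP load balancer. The load balancer gets an internal VPC
# address, so reachability from the AWS side still depends on the VPN
# routes advertising that subnet.
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal
  annotations:
    # Newer GKE versions; older clusters use
    # cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app          # assumed pod label
  ports:
    - name: http
      port: 80           # port clients connect to over the VPN
      targetPort: 8080   # assumed container port
```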

Related

How to do client-side load balancing of a GRPC service hosted inside a Kubernetes cluster from outside a Kubernetes cluster?

Scenario: We have a client outside a K8s cluster trying to access a GRPC service hosted inside a K8s cluster. Both the client and the service are part of the same VNET in Azure. We would like to use client-side load balancing for accessing this GRPC service.
Setup of our K8s cluster: Our K8s cluster is hosted inside an Azure VNET and uses Azure CNI networking model, so this means the pods in our cluster have the IP addresses from the VNET's IP address space. Please note we are not using AKS and are self-hosting the K8s cluster, but this whole question should not depend on this in my opinion.
Questions:
We would like to use client-side load balancing for accessing this GRPC service. If both our client and server were inside the K8s cluster, we could have used a K8s headless service to get the list of pod IP addresses. But since in this case the client is outside the K8s cluster, how can we retrieve those IP addresses from outside the cluster?
Can the K8s cluster create DNS records in a DNS server hosted outside the cluster, so that the external client can look up the list of IP addresses from it?
Thanks for your help!
I found that I could solve the issue by using ExternalDNS. After wiring up my cluster with ExternalDNS (linked to an Azure Private DNS zone), I created a headless service and found that, on deployment of this service, DNS records were created in the Azure Private DNS zone. I was able to get the list of pod IP addresses by simply doing a DNS lookup of the service's DNS name.
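For reference, a rough sketch of the headless Service this describes, assuming ExternalDNS is deployed with access to the private DNS zone (the hostname annotation, labels, and port are placeholders):

```yaml
# Hypothetical headless Service picked up by ExternalDNS. Because clusterIP
# is None, the DNS record created in the private zone resolves to the
# individual pod IPs, which the external gRPC client can use for
# client-side load balancing.
apiVersion: v1
kind: Service
metadata:
  name: grpc-backend
  annotations:
    # Hostname ExternalDNS should create in the linked private DNS zone
    external-dns.alpha.kubernetes.io/hostname: grpc-backend.internal.example.com
spec:
  clusterIP: None          # headless: no virtual IP, DNS returns pod IPs
  selector:
    app: grpc-backend      # assumed pod label
  ports:
    - name: grpc
      port: 50051          # assumed gRPC port
      targetPort: 50051
```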

Dynamic load balancing with kubernetes

I'm new to Kubernetes.
We have 50 IP addresses, and each IP address has a request limit. The limit is a value kept in a database. We want the load balancer to pick the IP address that has the most remaining limit according to the database. Can Kubernetes do that?
First of all, I advise you to read the official documentation about networking in Kubernetes - you can find it here: kubernetes-networking. In particular, read about Services. The built-in load balancing in Kubernetes never checks application-specific databases.
An abstract way to expose an application running on a set of Pods as a network service. With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
Take a Service of type ClusterIP as an example.
Kubernetes assigns a stable, reliable IP address to each newly-created Service (the ClusterIP) from the cluster's pool of available Service IP addresses. Kubernetes also assigns a hostname to the ClusterIP, by adding a DNS entry. The ClusterIP and hostname are unique within the cluster and do not change throughout the lifecycle of the Service. Kubernetes only releases the ClusterIP and hostname if the Service is deleted from the cluster's configuration. You can reach a healthy Pod running your application using either the ClusterIP or the hostname of the Service.
Take a look at how this works in GKE: GKE-IP-allocation.
You can also specify your own cluster IP address as part of a Service creation request - set the .spec.clusterIP field. The IP address that you choose must be a valid IPv4 or IPv6 address from within the service-cluster-ip-range CIDR range that is configured for the API server. If you try to create a Service with an invalid clusterIP address value, the API server will return a 422 HTTP status code to indicate that there's a problem.
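For illustration, a minimal Service manifest that sets .spec.clusterIP explicitly (the address, labels, and ports are placeholders; the address must sit inside your cluster's service-cluster-ip-range):

```yaml
# Hypothetical Service with a manually chosen ClusterIP. If the address is
# outside the configured service-cluster-ip-range or already taken, the API
# server rejects the request with HTTP 422.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  clusterIP: 10.96.0.50    # placeholder address from the service CIDR
  selector:
    app: my-app            # assumed pod label
  ports:
    - port: 80
      targetPort: 8080     # assumed container port
```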
To sum up: the Kubernetes load balancer never does a deep dive into your app or its database. To connect to your app you need to create a Service. Kubernetes assigns a stable, reliable IP address to each newly created Service, through which you can access your app from within or from outside the cluster. You can also manually assign an IP per Service.

Kubernetes pod to GCP Compute Engine VM communication

How can I connect to VMs running in GCP Compute Engine from a Kubernetes pod? I have set up a proxy server in Compute Engine and I need to use it from within pods.
This communication needs to use internal IPs. I have added firewall rules to allow all internal traffic.
Any suggestions on how to connect from pods to the GCP VMs?
You can create an internal load balancer in GCP and connect to the VM through it, or you can use VPC peering if the VM is in a different network.
If your GKE cluster and the VM are in the same network, you can use the internal IP of the VM to connect to it.
From inside a pod you can send curl requests to the VM over its internal IP.
OR
If your GKE cluster and the GCP VM are in different networks, you can use VPC peering to connect the two networks and then use the internal IP of the VM from the pod.
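As a quick connectivity check, one option is a throwaway pod that curls the VM's internal address; a sketch, assuming the proxy listens on 10.128.0.5:3128 (both the address and port are placeholders for your VM):

```yaml
# Hypothetical one-off Pod for testing pod-to-VM connectivity over the
# internal IP. A timeout here usually points at firewall rules or routing
# (e.g. missing VPC peering) rather than Kubernetes itself.
apiVersion: v1
kind: Pod
metadata:
  name: vm-connectivity-test
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:8.7.1
      # The image's entrypoint is curl, so these are plain curl arguments.
      args: ["-v", "--max-time", "5", "http://10.128.0.5:3128"]
```

Check the result with kubectl logs vm-connectivity-test once the pod has completed.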

Wrong IP from GCP kubernetes load balancer to app engine's service

I'm having some trouble with an nginx pod inside a Kubernetes cluster on GCP which should be able to access a service hosted on App Engine.
I have set firewall rules in App Engine to deny all and only allow some IPs, but the IP that hits my App Engine service isn't the IP of my nginx load balancer; instead it is the IP of one of the cluster's nodes.
An image is better than 1000 words, so here's a picture of our architecture:
The problem is: the IP that hits App Engine's firewall is IP A, whereas I thought it would be IP B. IP A changes every time I delete and recreate the cluster. If it were IP B, I could easily allow that IP in App Engine's firewall rules, since I've made it static. Does anyone have an idea how to get IP B instead of IP A?
Thanks
The IP address assigned to your nginx "load balancer" is (likely) not an IP owned or managed by your Kubernetes cluster. Services of type LoadBalancer in GKE use Google Cloud Load Balancers. These are an external abstraction which terminates inbound connections in Google's front-end infrastructure and passes traffic to the individual k8s nodes in the cluster for onward delivery to your k8s-hosted service.
Pods in a Kubernetes cluster will, by default, route egress traffic out of the cluster using the configuration of their host node. In GKE, this route corresponds to the gateway of the VPC in which the cluster (and, by extension, Compute Engine instances) exists. The public IP of cluster nodes will change as they are added and removed from the pool.
A workaround uses a dedicated instance with a static external IP to process egress traffic leaving your VPC (i.e. egress from your cluster). Google has a tutorial for this purpose here: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
There are k8s-native solutions, but these will be unsuitable in a GKE context at present due to the inability to maintain any node with a non-ephemeral public IP.
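To see which address external services actually observe, you can make an outbound request from inside the cluster and echo the source IP back; a rough sketch (the echo service used here is just an example):

```yaml
# Hypothetical one-off Pod that prints the public IP external services see
# for the cluster's egress traffic. Without a NAT gateway this is the
# external IP of whichever node the pod runs on (IP A in the question),
# not the load balancer's IP (IP B).
apiVersion: v1
kind: Pod
metadata:
  name: egress-ip-check
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:8.7.1
      args: ["-s", "https://ifconfig.me"]
```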

Kubernetes with Google Cloud DNS

Using a Google Container Engine cluster running Kubernetes, what would the process be in order to point http://mydomain.co.uk onto a LoadBalanced ReplicationController?
I'm aware Kubernetes supports SkyDNS - how would I go about delegating Google Cloud DNS for a domain name onto the internal Kubernetes cluster DNS service?
You will need to create a service that maps onto the pods in your replication controller and then expose that service outside of your cluster. You have two options to expose your web service externally:
Set your service to be type: LoadBalancer which will provision a Network load balancer.
Use the ingress support in Kubernetes to create an HTTP(S) load balancer.
The end result of either option is that you will have a public IP address that is routed to the service backed by your replication controller.
Once you have that IP address, you will need to manually configure a DNS record to point your domain name at the IP address.
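A minimal sketch of option 1, assuming the pods managed by the ReplicationController carry the label app: web (the names and ports are placeholders):

```yaml
# Hypothetical Service of type LoadBalancer. GKE provisions a network load
# balancer with a public IP; once it is assigned, create an A record for
# mydomain.co.uk in Google Cloud DNS pointing at that IP.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web            # must match the pod labels in your ReplicationController
  ports:
    - port: 80          # port exposed on the load balancer
      targetPort: 8080  # assumed container port
```

You can read the assigned external IP from kubectl get service web once provisioning finishes.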