How to do client-side load balancing of a gRPC service hosted inside a Kubernetes cluster from outside the cluster?

Scenario: We have a client outside a K8s cluster trying to access a gRPC service hosted inside the cluster. Both the client and the service are part of the same VNET in Azure. We would like to use client-side load balancing to access this gRPC service.
Setup of our K8s cluster: Our K8s cluster is hosted inside an Azure VNET and uses the Azure CNI networking model, which means the pods in our cluster get IP addresses from the VNET's IP address space. Please note that we are not using AKS and are self-hosting the K8s cluster, but in my opinion this question should not depend on that.
Questions:
We would like to use client-side load balancing for accessing this gRPC service. If both our client and server were inside the K8s cluster, we could have used a K8s headless service to get the list of pod IP addresses. But since our client is outside the K8s cluster, how can we retrieve those IP addresses from outside the cluster?
Can the K8s cluster create DNS records in a DNS server hosted outside the cluster, so that the external client can look up the list of IP addresses from it?
Thanks for your help!

I found that I could solve the issue by using ExternalDNS. After wiring up my cluster with ExternalDNS (linked to an Azure Private DNS zone), I created a headless service and found that, on deployment of this service, DNS records were created in the Azure Private DNS zone. I was able to get the list of pod IP addresses simply by doing a DNS lookup of the service's DNS name.
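To illustrate, here is a minimal Go sketch of the client side, assuming the headless service's record in the Azure Private DNS zone is grpc-svc.private.example.com and the service listens on port 50051 (both are placeholders): gRPC's built-in dns resolver fetches all A records behind the name, and the round_robin policy spreads RPCs across the resulting connections.

```go
// Minimal sketch: client-side load balancing across the A records that
// ExternalDNS published for the headless service. The DNS name and port
// are hypothetical placeholders.
package main

import (
	"fmt"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// A plain DNS lookup shows each pod behind the headless service
	// as a separate A record in the private zone.
	addrs, err := net.LookupHost("grpc-svc.private.example.com")
	if err != nil {
		panic(err)
	}
	fmt.Println("pod IPs:", addrs)

	// The dns:/// scheme tells gRPC to resolve all A records itself;
	// round_robin then balances RPCs across the open connections.
	conn, err := grpc.Dial(
		"dns:///grpc-svc.private.example.com:50051",
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// Use conn with your generated service client as usual.
}
```

Note that the dns resolver re-resolves when connections are lost, so pods that come and go may not be picked up instantly.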

Related

GCP Kubernetes service subnet cannot be accessed via AWS VPN

I have a Kubernetes cluster on GCP with this network configuration:
instance address range: 10.109.16.0/20,
pods address range: 10.18.0.0/16,
service address range: 10.84.16.0/20
I also have a site-to-site VPN from AWS, and I want to reach the service address range of the Kubernetes cluster from an AWS instance over that VPN. The pod address range is already reachable (tested via ICMP). Since the service addresses only expose specific ports, I tried to curl a Kubernetes service on its port from the AWS instance, but got a timeout error.
So why is the pod address range reachable but not the service address range?
As per the GCP documentation [1]: "As with any GKE cluster, Service (ClusterIP) addresses are only available from within the cluster. If you need to access a Kubernetes Service from VM instances outside of the cluster, but within the cluster's VPC network and region, create an internal TCP/UDP load balancer."
If you are creating the service as ClusterIP, it is not reachable from outside the cluster. Create a service of type LoadBalancer (internal) and see if everything works as expected.
[1]https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#restrictions
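For reference, a sketch of creating such an internal LoadBalancer service with client-go; the service name, selector, and port are hypothetical placeholders. The networking.gke.io/load-balancer-type: "Internal" annotation is what makes GKE provision an internal rather than an external load balancer.

```go
// Sketch: create a type LoadBalancer service annotated for GKE's
// internal TCP/UDP load balancer, so VPN-connected clients get a
// VPC-routable IP instead of the cluster-only ClusterIP.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "my-svc-internal", // placeholder name
			Annotations: map[string]string{
				// Ask GKE for an internal TCP/UDP load balancer.
				"networking.gke.io/load-balancer-type": "Internal",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "my-app"}, // placeholder selector
			Ports: []corev1.ServicePort{{
				Port:       8080, // placeholder port
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	if _, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The load balancer's internal IP then appears in the service's status and is reachable over the VPN, provided your VPN routes cover the subnet it is allocated from.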

Service external IP pending on Kubernetes hosted on Jelastic

I have installed my Kubernetes cluster on Jelastic. Now I have defined a service of type LoadBalancer and would like it to be provided with an external IP, but the external IP is currently marked as pending. What should I do to make it non-pending? Do I have to provide the worker nodes with an external IPv4?
In my current setup, my worker nodes have no IPv4 because I put an nginx load balancer in front of the cluster.
The IPv4 is set on the nginx node. Is that a problem? If I want to access my LoadBalancer service inside my Kubernetes cluster, what should I do?
For the LoadBalancer service type to work, the cloud provider must implement the relevant APIs.
With regard to Jelastic, as per their docs (https://docs.jelastic.com/kubernetes-exposing-services/), they don't support it:
Jelastic PaaS does not support the LoadBalancer service type currently.
In Jelastic, public IP addresses have to be attached to worker nodes.
Every worker node runs an ingress controller instance (based on nginx/haproxy/traefik) with HTTP/HTTPS listeners that can forward traffic to the required service.
You just have to bind your domain as a CNAME to the environment FQDN, and then every worker node can accept requests in round-robin DNS mode.
Does this scenario work for you, or do you have a specific requirement to use an external load balancer?
By default, when public IPs are not attached to the worker instances, traffic goes through the Shared Load Balancer.
P.S. If you install the Certificate Manager add-on to your K8s cluster, you can also issue free Let's Encrypt certificates.
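A quick way to sanity-check the round-robin DNS setup is to resolve the bound name and confirm that each worker node with a public IP appears as a separate A record; a tiny Go sketch (the domain is a placeholder):

```go
// Sketch: resolve the domain bound as a CNAME to the Jelastic
// environment FQDN and list the worker-node IPs it round-robins over.
package main

import (
	"fmt"
	"net"
)

func main() {
	ips, err := net.LookupIP("myapp.example.com") // placeholder domain
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println(ip) // one line per worker node with a public IP
	}
}
```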

How to IP whitelist a Kubernetes cluster

So I have a Kubernetes cluster running in Google Cloud, and from pods inside the cluster I need to access an external DB which has IP whitelisting configured. It seems that I need a static, shared IP for the cluster's outgoing traffic; what's the best approach?
Setting up a service IP seems irrelevant, as that's for inbound traffic. I looked into Cloud NAT and it seems promising, but I'm not exactly sure how to set it up. Any docs/tutorials would be helpful, thanks!
According to the docs, when traffic leaves a GKE Kubernetes cluster it gets SNATed to the IP of the node, so you could whitelist the IPs of all the cluster's nodes.
Here are some best practices on connecting to external services from a Kubernetes cluster, and an example of connecting to Cloud SQL from Google Kubernetes Engine.
An example setup of Cloud NAT on GKE.
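If you go the node-IP route, a client-go sketch like the one below can gather the addresses for the whitelist. Bear in mind that node IPs change as the cluster scales or upgrades, which is exactly why Cloud NAT's stable IP is the more robust option.

```go
// Sketch: list the external IPs of all cluster nodes, i.e. the source
// IPs that SNATed egress traffic will appear to come from.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		for _, addr := range node.Status.Addresses {
			if addr.Type == corev1.NodeExternalIP {
				// Add these addresses to the DB's whitelist.
				fmt.Printf("%s: %s\n", node.Name, addr.Address)
			}
		}
	}
}
```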

How to access services in a different Kubernetes cluster using DNS

How can I access services in a different Kubernetes cluster using DNS, without using an external ingress?
Within the cluster I am using internal DNS, e.g. mysvc.mynamespace.
How can I achieve the same from one cluster to another?
You can explore KubeFed, which allows cross-cluster service discovery.

Kubernetes with Google Cloud DNS

Using a Google Container Engine cluster running Kubernetes, what would the process be to point http://mydomain.co.uk at a load-balanced ReplicationController?
I'm aware Kubernetes supports SkyDNS; how would I go about delegating a domain name from Google Cloud DNS to the internal Kubernetes cluster DNS service?
You will need to create a service that maps onto the pods in your replication controller and then expose that service outside of your cluster. You have two options to expose your web service externally:
Set your service to type: LoadBalancer, which will provision a network load balancer.
Use the ingress support in Kubernetes to create an HTTP(S) load balancer.
The end result of either option is that you will have a public IP address that is routed to the service backed by your replication controller.
Once you have that IP address, you will need to manually configure a DNS record to point your domain name at the IP address.
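Once the load balancer has been provisioned, the IP for that DNS record can be read from the service's status; a small client-go sketch (the service name and namespace are placeholders):

```go
// Sketch: fetch the public IP assigned to a type: LoadBalancer service,
// which is the address to put in the domain's A record in Cloud DNS.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// "my-web-service" in "default" is a placeholder.
	svc, err := client.CoreV1().Services("default").Get(context.TODO(), "my-web-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ing := range svc.Status.LoadBalancer.Ingress {
		fmt.Println("external IP:", ing.IP) // point mydomain.co.uk here
	}
}
```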