Connect an external load balancer to an on-premises Service Fabric cluster - azure-service-fabric

How do you connect an external load balancer (for example, an F5) to an on-premises Service Fabric cluster?
In Azure this is already handled, since a Service Fabric cluster deployed in Azure sits behind an Azure Load Balancer that round-robins requests across stateless service instances.
Thanks.

Related

How to do client-side load balancing of a gRPC service hosted inside a Kubernetes cluster from outside the Kubernetes cluster?

Scenario: We have a client outside a K8s cluster trying to access a gRPC service hosted inside a K8s cluster. Both the client and the service are part of the same VNET in Azure. We would like to use client-side load balancing for accessing this gRPC service.
Setup of our K8s cluster: Our K8s cluster is hosted inside an Azure VNET and uses the Azure CNI networking model, which means the pods in our cluster get IP addresses from the VNET's IP address space. Please note that we are not using AKS and are self-hosting the K8s cluster, though in my opinion the question should not depend on that.
Questions:
We would like to use client-side load balancing for accessing this gRPC service. If both our client and server were inside the K8s cluster, we could have used a K8s headless service to get the list of pod IP addresses. But since the client is outside the K8s cluster, how can we retrieve those IP addresses from outside the cluster?
Can the K8s cluster create DNS records in a DNS server hosted outside the cluster, so that the external client can look up the list of IP addresses from it?
Thanks for your help!
I found that I could solve the issue by using ExternalDNS. After wiring up my cluster with ExternalDNS (linked to an Azure Private DNS zone), I created a headless service and found that, on deployment of this service, DNS records were created in the Azure Private DNS zone. I was able to get the list of pod IP addresses just by doing a DNS lookup of the service's DNS name.
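A minimal sketch of what that lookup and the client-side load balancing can look like, assuming ExternalDNS has published A records for the headless service under a name like my-grpc-service.internal.example.com (the hostname and port below are placeholders, not names from the question):

```python
# Sketch: resolve the headless service's DNS name published by ExternalDNS,
# then let gRPC round-robin over the resolved pod addresses.
import socket

import grpc

SERVICE_DNS_NAME = "my-grpc-service.internal.example.com"  # hypothetical record created by ExternalDNS
SERVICE_PORT = 50051                                        # hypothetical gRPC port

# 1) Plain DNS lookup: each A record is one pod IP behind the headless service.
pod_ips = sorted({info[4][0] for info in
                  socket.getaddrinfo(SERVICE_DNS_NAME, SERVICE_PORT, proto=socket.IPPROTO_TCP)})
print("Pod IPs:", pod_ips)

# 2) Let gRPC's built-in DNS resolver do the same lookup and round-robin
#    across the resolved addresses (client-side load balancing).
channel = grpc.insecure_channel(
    f"dns:///{SERVICE_DNS_NAME}:{SERVICE_PORT}",
    options=[("grpc.lb_policy_name", "round_robin")],
)
# stub = MyServiceStub(channel)  # generated stub for your service
```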

How to integrate AWS API Gateway with an EKS cluster to access microservices deployed behind ClusterIP services using an ELB

I have created an EKS cluster with 2 nodes and deployed a few microservices behind ClusterIP services.
Since ClusterIP services are only accessible internally, I want to expose them through AWS API Gateway using an ELB.
When you create an Ingress in Kubernetes with an ingress controller such as the AWS Load Balancer Controller installed, a load balancer is automatically provisioned for the Ingress.
If you are using Route 53 as your DNS manager, then after you have created the Ingress you can add an A (alias) record pointing to the newly created Application Load Balancer.
Please refer to the AWS documentation here to create ingress controllers:
https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
One solution to the problem is to expose the service through a private (internal) NLB using the AWS Load Balancer Controller and then link this NLB to API Gateway via a VPC link.
Further steps and documentation for the process can be found here.
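Once the controller has reconciled the Ingress, the hostname of the provisioned load balancer can be read from the Ingress status and used as the target of the Route 53 record or the API Gateway VPC link integration. A rough sketch with the official Kubernetes Python client, where the Ingress name and namespace are placeholders:

```python
# Sketch: read the ALB/NLB hostname the controller provisioned for an Ingress.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster

networking = client.NetworkingV1Api()
ingress = networking.read_namespaced_ingress(name="my-api", namespace="default")  # hypothetical names

lb_entries = ingress.status.load_balancer.ingress or []
if lb_entries:
    print("Load balancer hostname:", lb_entries[0].hostname)
else:
    print("Load balancer not provisioned yet")
```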

How Kubernetes balances the load to the EKS worker nodes

I am trying to learn about Amazon EKS. I have created an EKS cluster along with a node group. Now I want to balance the load coming to the worker nodes. Do I need to explicitly add a load balancer, or will the control plane take care of it by itself?
Kubernetes comes with kube-proxy, which provides L4 load balancing for replica pods deployed across multiple Kubernetes worker nodes. But if you want more sophisticated load balancing, you can use an external load balancer.
For load balancing requests to the Kubernetes API server, it's recommended to expose the API server endpoints to your clients via a load balancer.
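Concretely, the "external load balancer" route means creating a Service of type LoadBalancer yourself; on EKS this provisions an AWS load balancer in front of the matching pods, and kube-proxy then spreads traffic across them on the worker nodes. A minimal sketch with the Kubernetes Python client, where the app label, ports, and names are placeholders:

```python
# Sketch: explicitly add an external load balancer by creating a
# Service of type LoadBalancer that selects your pods.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-lb"),             # hypothetical service name
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "web"},                              # hypothetical pod label
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```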

Can we reach a server running inside a Kubernetes cluster from outside?

I have a requirement that a server running inside one of my containers in a K8s cluster should be able to reach a server running on some other machine (currently in AWS). The problem is that both servers (the one in AWS and the one in the Kubernetes cluster) should be able to reach each other.
My server in AWS is not able to ping my server running in the Kubernetes cluster.
Is that possible? Can we do it?
Yes, you can use ingress-nginx to create publicly reachable services.
If you want to do it manually, you can set up load balancers that map to specific IP ranges for your nodes; this approach works for SSH traffic as well.
Yes, you can use the Kubernetes Ingress object; it will create publicly reachable services.
Mainly, if you are using AWS or DigitalOcean and you use an Ingress, it will create a load balancer (ELB or ALB) and a public service, and you can access the server running inside Kubernetes.
You can also do it manually: simply use a Kubernetes Service and expose it using a LoadBalancer or NodePort (see the sketch after the link below).
https://kubernetes.io/docs/concepts/services-networking/service/
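A rough sketch of the manual NodePort route, using the Kubernetes Python client to work out the address an outside machine (such as the AWS server) can hit; the Service name and namespace are placeholders:

```python
# Sketch: given a Service already exposed as type NodePort, find the allocated
# node port and a worker node's external IP to build a reachable URL.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

svc = core.read_namespaced_service(name="my-app", namespace="default")  # hypothetical NodePort service
node_port = svc.spec.ports[0].node_port

# Pick any worker node's ExternalIP address (falls back to the first address if none is set).
addresses = core.list_node().items[0].status.addresses
node_ip = next((a.address for a in addresses if a.type == "ExternalIP"),
               addresses[0].address)

print(f"Reachable from outside the cluster at: http://{node_ip}:{node_port}")
```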

How to set up gRPC for AWS Kubernetes services

Any idea how to set up gRPC for services exposed from Kubernetes? We have deployed Linkerd in our cluster, and that seems to work well and gives us gRPC within Kubernetes. However, we haven't found a solution that gives us gRPC connectivity from outside the cluster.