K8s LB Networking - kubernetes

I understand what the LoadBalancer service type does, i.e. it spins up an LB instance in your cloud account, NodePorts are created, and traffic sent to the VIP is forwarded onto the NodePorts.
However, how does this actually work in terms of kubectl and the LB spin-up? Is this a construct within the CNI? What part of K8s sends the request and instructs the cloud provider to create the LB?
Thanks,

In this case the CloudControllerManager is responsible for the creation. The CloudControllerManager contains a ServiceController that listens to Service Create/Update/Delete events and triggers the creation of a LoadBalancer based on the configuration of the Service.
In general, Kubernetes works on the concept of declaratively creating a Resource (such as a Service), whose desired state is stored in state storage (etcd in Kubernetes). The controllers are responsible for making sure that state is realised. In this case the state is realised by creating a Load Balancer in a cloud provider and pointing it to the Kubernetes cluster.
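As a rough sketch (the name, label and ports below are made up for illustration), the declarative part is just a Service manifest like this; applying it stores the desired state, and the ServiceController in the CloudControllerManager picks up the create event and provisions the cloud load balancer:
apiVersion: v1
kind: Service
metadata:
  name: my-app                # hypothetical service name
spec:
  type: LoadBalancer          # this is what triggers the ServiceController
  selector:
    app: my-app               # pods backing the service
  ports:
  - port: 80                  # port exposed by the load balancer / cluster IP
    targetPort: 8080          # container port traffic is forwarded to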

Related

How to deploy a kubernetes service (type LoadBalancer) on on-prem VMs?

How to deploy a kubernetes service (type LoadBalancer) on on-prem VMs? When I use type=LoadBalancer it shows the external IP as "pending", but everything works fine with the same yaml if I deploy it on GKE. My questions are:
Do we need a load balancer if I use type=LoadBalancer on on-prem VMs?
Can I assign the LoadBalancer IP manually in yaml?
You need to set up MetalLB.
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type LoadBalancer in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
To install, run:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
For more details, see the MetalLB installation documentation (https://metallb.universe.tf/installation/).
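Note that MetalLB only starts handing out addresses once it is given a configuration; a minimal layer-2 address pool would look roughly like the ConfigMap below (the IP range is purely an example and has to be a free range in your own network):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # example range, adjust to your network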
It might be helpful to check the Banzai Cloud Pipeline Kubernetes Engine (PKE), which is "a simple, secure and powerful CNCF-certified Kubernetes distribution". It was designed to work on any cloud, VM or bare metal nodes to provide a scalable and secure foundation for private clouds. PKE is cloud-aware and includes an ever-increasing number of cloud and platform integrations.
When I use type=LoadBalancer it shows the external IP as "pending", but everything works fine with the same yaml if I deploy it on GKE.
If you create a LoadBalancer service — for example try to expose your own TCP based service, or install an ingress controller — the cloud provider integration will take care of creating the needed cloud resources, and writing back the endpoint where your service will be available. If you don't have a cloud provider integration or a controller for this purpose, your Service resource will remain in Pending state.
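The "writing back" is visible in the Service object itself: once a controller has provisioned something, it fills in the status section, roughly like this (the address is only an example; while nothing fills it in, the field stays empty and kubectl shows "pending"):
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10   # written back by the cloud provider integration or MetalLB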
In the case of Kubernetes, LoadBalancer services are the easiest and most common way to expose a service (redundant or not) to the world outside of the cluster or the mesh — to other services, to internal users, or to the internet.
Load balancing as a concept can happen on different levels of the OSI network model, mainly on L4 (transport layer, for example TCP) and L7 (application layer, for example HTTP). In Kubernetes, Services are an abstraction for L4, while Ingresses are a generic solution for L7 routing.
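To make the L7 side concrete, an Ingress resource (the hostname and backend names below are placeholders, and it only does something if an ingress controller is installed) routes HTTP traffic by host/path and hands it off to an ordinary Service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: app.example.com        # L7 routing on the HTTP Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app         # the L4 Service behind this rule
            port:
              number: 80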
You need to set up MetalLB.
MetalLB is one of the most popular on-prem replacements for LoadBalancer cloud integrations. The whole solution runs inside the Kubernetes cluster.
The main component is an in-cluster Kubernetes controller which watches LB service resources, and based on the configuration supplied in a ConfigMap, allocates and writes back IP addresses from a dedicated pool for new services. It maintains a leader node for each service, and depending on the working mode, advertises it via BGP or ARP (sending out unsolicited ARP packets in case of failovers).
MetalLB can operate in two ways: either all requests are forwarded to pods on the leader node, or distributed to all nodes with kube-proxy.
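Once MetalLB is running and configured, an ordinary type: LoadBalancer Service gets an address from the pool automatically; if you want to pin a particular one (which also answers the "assign the IP manually in yaml" part of the question), you can request it via spec.loadBalancerIP. A sketch, with illustrative values that must come from the configured pool:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.241   # must be a free address inside the MetalLB pool
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080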
Layer 7 (usually HTTP/HTTPS) load balancer appliances like F5 BIG-IP, or HAProxy and Nginx based solutions, may be integrated with an applicable ingress controller. If you have one, you won't need a LoadBalancer implementation in most cases.
Hope that sheds some light on a "LoadBalancer on bare metal hosts" question.

Amazon ECS service access and load balancing in microservice architecture

Can someone explain the load balancing mechanism in AWS ECS for me? I clearly understand how inter-service communication is handled within a Kubernetes cluster: there is an automatic load balancer applied when accessing a defined internal service. This means Container/Pod scalability is simply predefined:
when a Pod-1A from within the service-A is accessing another Pod-1B
from within a different Service-B (Service to Service communication)
this call is automatically load balanced to this Pod-1B from
service-B.
So with the service registry in Kubernetes we simply need to define Services, and communication is automatically load balanced to the available Pods within the services.
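On the Kubernetes side this is nothing more than a ClusterIP Service in front of Service-B's Pods; a sketch like the one below (names and ports are invented) gives callers a stable DNS name and virtual IP, and kube-proxy spreads connections across the matching Pods:
apiVersion: v1
kind: Service
metadata:
  name: service-b              # callers simply use http://service-b
spec:
  selector:
    app: service-b             # matches all replicas of Service-B's Pods
  ports:
  - port: 80                   # stable virtual port on the ClusterIP
    targetPort: 8080           # container port of the Pods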
So assuming that Pods are equal to Tasks and Services are equal to Services in AWS ECS, how is this load balancing mechanism handled within ECS? Do we really need to apply an Elastic Load Balancer at the task/pod level manually, compared to Kubernetes? (So that we would need to manually define a load balancer for every service to make the service and its tasks with their containers scalable?)
Edit:
What is the reason in AWS ECS, to define a service which instantiates
multiple replicas of a Task, when no load balancer has been defined?
Will the traffic be routed only to the same Task replica (Container)
all the time? (No scaling at all?)
Please note, this is not about access from external ip addresses, where an ingress controller is needed. I am talking about microservices where each service exposes its own http api to communicate with other services within the cluster (internal microservice Application), typically there is an API Gateway handling external traffic (ingress controller).

How to discover services deployed on kubernetes from the outside?

The User Microservice is deployed on kubernetes.
The Order Microservice is not deployed on kubernetes, but registered with Eureka.
My questions:
How can the Order Microservice discover and access the User Microservice through Eureka?
First let's take a look at the problem itself:
If you use an overlay network as the Kubernetes CNI (e.g. Flannel), the problem is that it creates an isolated network that's not reachable from the outside. If you have a network like that, one solution would be to move the Eureka server into Kubernetes so Eureka can reach both the service inside Kubernetes and the service outside of Kubernetes.
Another solution would be to tell Eureka where it can find the service instead of relying on auto-discovery, but for that you also need to make the service externally available with a Service of type NodePort, HostPort or LoadBalancer, or with an Ingress. I'm not sure it's possible, but section 11.2 of the Eureka Client Discovery doc could be worth a look.
The third solution would be to use a CNI that doesn't use an overlay network, like Romana, which will make the service externally routable by default.
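As a sketch of the second option, a NodePort Service (the names and ports here are assumptions) would make the User Microservice reachable on any node's IP at port 30080 from outside the overlay network, and that node-ip:30080 address is what you would register in Eureka:
apiVersion: v1
kind: Service
metadata:
  name: user-service           # assumed name for the User Microservice
spec:
  type: NodePort
  selector:
    app: user-service
  ports:
  - port: 80                   # in-cluster port
    targetPort: 8080           # container port of the User Microservice
    nodePort: 30080            # exposed on every node's IP for external clients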

Access a specific pod from external

We have an old service discovery system that requires processes to register their ip:port during startup. On a Kubernetes cluster, we exposed a Service with NodePort enabled. The processes within the containers can register to the old system with their Pod ip:port + host IP. Clients within the same Kubernetes cluster should be able to connect to the right process via the specific Pod ip:port. For an external client, which knows the host IP + NodePort and the specific Pod ip:port, is there an efficient way to route the client's request to the specific Pod? Running a proxy on each node to route the traffic (nodeport -> pod) seems inefficient due to the additional proxy layer.
I guess you mean you don't want to add a Service of type NodePort, as for your case that seems like an additional proxy layer. I can see how it is an additional layer in your case. Typically Kubernetes would be doing the orchestration and the Service would be part of the service-discovery mechanism. It sounds like you could use hostPort. But if you do go this route you should be aware it's not suggested practice, as Kubernetes is intended for orchestration.
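For completeness, hostPort is declared per container in the Pod spec; a rough sketch (the name, image and ports are placeholders) would be the following, with the usual caveat that only one such Pod can use a given port per node:
apiVersion: v1
kind: Pod
metadata:
  name: registered-app          # placeholder name
spec:
  containers:
  - name: app
    image: example.com/app:1.0  # placeholder image
    ports:
    - containerPort: 8080
      hostPort: 8080            # the process is reachable directly on <node IP>:8080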

how kubernetes service works?

Of all the concepts in Kubernetes, I find the Service working mechanism the most difficult to understand.
Here is what I imagine right now:
kube-proxy on each node listens for any new service/endpoint from the master API server
If there is any new service/endpoint, it adds a rule to that node's iptables
For a NodePort service, an external client has to access the new service through one of the nodes' IPs and the NodePort. The node will forward the request to the new service IP
Is this correct? There are still a few things I'm not clear on:
Do services live within the nodes? If so, can we ssh into nodes and inspect how services work?
Are service IPs virtual IPs and only accessible within nodes?
Most of the diagrams that I see online draw services as crossing all nodes, which makes it even more difficult to imagine
kube-proxy on each node listens for any new service/endpoint from the master API server
Kubernetes uses etcd to share the current cluster configuration information across all nodes (including pods, services, deployments, etc.).
If there is any new service/endpoint, it adds a rule to that node's iptables
Internally Kubernetes has a so-called Endpoint Controller that is responsible for modifying the DNS configuration of the virtual cluster network to make service endpoints available via DNS (and environment variables).
For a NodePort service, an external client has to access the new service through one of the nodes' IPs and the NodePort. The node will forward the request to the new service IP
Depending on the service type, additional action is taken, e.g. for type NodePort a port is made available on the nodes (in addition to the automatically created ClusterIP), or an external load balancer is created with the cloud provider, etc.
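To make the port terminology concrete, here is a small example of a NodePort Service (all values are illustrative): the ClusterIP plus port stays usable inside the cluster, while the nodePort is what gets opened on every node for external clients:
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80          # port on the virtual ClusterIP, used inside the cluster
    targetPort: 8080  # the Pods' container port kube-proxy forwards to
    nodePort: 31080   # opened on every node's IP for external access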
Do services live within the nodes? If so, can we ssh into nodes and inspect how services work?
As explained, services are manifested in the cluster configuration, the endpoint controller, as well as in additional things like ClusterIPs, load balancers, etc. I cannot see a need to ssh into nodes to inspect services. Typically, interacting with the cluster API should be sufficient to investigate/update the service configuration.
Are service IPs virtual IPs and only accessible within nodes?
Service IPs, like Pod IPs, are virtual and accessible from within the cluster network. There is a global allocation map in etcd that maintains the complete list and allows allocating unique new ones. For more information on the networking model, read this blog.
For more detailed information see the docs for kubernetes components and services.