In a service mesh architecture, must the call from Service A to Service B happen through a central component?

Let's say we have the following setup:
Service A consists of a pod in a Kubernetes cluster with two containers: Api A and Sidecar A. Api A communicates with the outside world through Sidecar A. Sidecar A is registered as a consumer.
Service B consists of a pod in a Kubernetes cluster with two containers: Api B and Sidecar B. Api B communicates with the outside world via Sidecar B. Sidecar B is registered as a producer.
Service A and Service B could potentially have multiple instances.
The services register themselves with the service mesh through a central authority, let's call it Service Discovery, which knows about the specific instances of each service and the endpoints they expose. Service A can also subscribe to a specific endpoint of Service B via this central authority. (The central authority also deals with security, tokens, and certificates, but I want to keep this simple.)
Sidecar A and Sidecar B regularly communicate with Service Discovery to confirm availability.
How should Service A call an endpoint of Service B:
directly, via a specific URL, because Sidecar A should know about the instances of Service B via service discovery and should choose a healthy one?
or indirectly, by calling a generic API of Service Discovery, which should know which instances are healthy and redirect the request to one of them?
or in some other way?

I found out that the recommended way is for Service A to call the endpoint of Service B directly via a specific URL, because Sidecar A should know about the instances of Service B via service discovery and should choose a healthy instance.
The purpose of service discovery is just that: to allow services to be discoverable. It should not serve as a proxy between calls.
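To illustrate the recommended pattern, here is a minimal sketch in Python of the selection logic a sidecar might apply: it keeps a locally cached instance list obtained from service discovery and picks a healthy instance to call directly. The names (`Instance`, `pick_healthy`) and addresses are illustrative assumptions, not taken from any real mesh implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Instance:
    """One registered instance of a service, as reported by service discovery."""
    url: str
    healthy: bool

def pick_healthy(instances):
    """Client-side load balancing: choose one healthy instance at random.

    The sidecar calls the chosen URL directly; service discovery is only
    consulted to refresh this cached list, never to proxy the request itself.
    """
    candidates = [i for i in instances if i.healthy]
    if not candidates:
        raise RuntimeError("no healthy instance of the target service")
    return random.choice(candidates)

# Locally cached view of Service B, refreshed periodically from Service Discovery.
service_b = [
    Instance("http://10.0.0.1:8080", healthy=True),
    Instance("http://10.0.0.2:8080", healthy=False),
]

target = pick_healthy(service_b)
print(target.url)  # the sidecar would now send the request straight to this URL
```

The key point the sketch captures: the discovery lookup and the actual call are decoupled, so Service Discovery never sits on the data path.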

Related

How to implement API routing with istio

My goal is to implement API routing with Istio
Assume that there are 3 services:
Service A
Service B
Service C
and Service A uses Service B.
I want to make Service A use Service C instead, without modifying Service A.
I checked Istio docs for Traffic management, Virtual Services and Destination Rules
Istio doc says
Virtual services also let you:
Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. Mapping a single virtual service to multiple “real” services is particularly useful in facilitating turning a monolithic application into a composite service built out of distinct microservices without requiring the consumers of the service to adapt to the transition. Your routing rules can specify “calls to these URIs of monolith.com go to microservice A”, and so on. You can see how this works in one of our examples below.
Configure traffic rules in combination with gateways to control ingress and egress traffic.
And my understanding was that we can use Virtual Service as an abstraction layer to decouple Service A from dependency on Service B as shown below:
                                                    /--> Service B
Service A -> Virtual Service -> Destination Rule -> |
                                                    \--> Service C
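Expressed as Istio configuration, the indirection I have in mind might be sketched like this (the host `backend.internal.example`, the service name `service-c`, and the exact API version are my assumptions, not verified config):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: backend-switch
spec:
  # Hypothetical host that callers in Service A would use. Note that a
  # VirtualService does not create DNS for this host by itself; the name
  # must already be resolvable in the mesh.
  hosts:
  - backend.internal.example
  http:
  - route:
    # All traffic currently goes to Service C instead of Service B.
    - destination:
        host: service-c
      weight: 100
```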
But when I started to implement a POC, I discovered a problem: I cannot use the DNS name of a VirtualService in Service A, because a VirtualService by itself does not create any DNS records.
I am confused as to what DNS name I should specify if I do not want it to be tied to either Service B or Service C.
One thought was to create an internal ingress gateway and use its hostname, but is that really necessary? I do not want all traffic in the mesh to pass through this gateway, as I think it would reduce performance.

Pass a dynamically-generated port from one Service to another Service in the same application

I have a Service Fabric application which consists of two services. Both are stateless, and each has a single instance.
In Service A, I would like to define an Endpoint in the Resources section of the ServiceManifest.xml. I don't care what port I get; I just need one, and I'd like to get it from Service Fabric so it can ensure it's accessible between the VMs managed by the Service Fabric cluster.
In Service B, I'd like to pass in the port created for Service A so it can use it to interact with Service A. I will be defining both services with Service Fabric DNS names, so Service B will know the host of Service A (regardless of where it's running). But Service B also needs to know the port that was created for Service A (via its Endpoint declaration). Is that information passed to the services? Can it be passed as a parameter, or is there another mechanism?
Thanks for any help
You can discover information about the endpoints of the other service by using the QueryManager on the FabricClient. Example here.
Out of curiosity, can't you use SF remoting for this?

K8 LB Networking

I understand what the LoadBalancer Service type does, i.e. it spins up an LB instance with your cloud provider, NodePorts are created, and traffic sent to the VIP is forwarded to the NodePorts.
However, how does this actually work in terms of kubectl and the LB spin-up? Is this a construct within the CNI? What part of Kubernetes sends the request and instructs the cloud provider to create the LB?
Thanks,
In this case the CloudControllerManager is responsible for the creation. The CloudControllerManager contains a ServiceController that listens to Service Create/Update/Delete events and triggers the creation of a LoadBalancer based on the configuration of the Service.
In general in Kubernetes you have the concept of declaratively creating a Resource (such as a Service), of which the state is stored in State Storage (etcd in Kubernetes). The controllers are responsible for making sure that that state is realised. In this case the state is realised by creating a Load Balancer in a cloud provider and pointing it to the Kubernetes Cluster.
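For example, a minimal Service manifest that the ServiceController would react to looks like this (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  # The type is the declared state; the CloudControllerManager's
  # ServiceController observes it and asks the cloud provider for an LB.
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```

Applying this with kubectl only writes the desired state to etcd; the controller loop then does the actual provisioning.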

How to discover services deployed on kubernetes from the outside?

The User Microservice is deployed on kubernetes.
The Order Microservice is not deployed on kubernetes, but registered with Eureka.
My questions:
How can the Order Microservice discover and access the User Microservice through Eureka?
First, let's take a look at the problem itself:
If you use an overlay network as the Kubernetes CNI, the problem is that it creates an isolated network that is not reachable from the outside (e.g. Flannel). If you have a network like that, one solution would be to move the Eureka server into Kubernetes, so Eureka can reach both the service inside Kubernetes and the service outside of it.
Another solution would be to tell Eureka where it can find the service instead of relying on auto-discovery, but for that you also need to make the service externally available with a Service of type NodePort, HostPort, or LoadBalancer, or with an Ingress. I'm not sure it's possible, but section 11.2 in the following doc could be worth a look: Eureka Client Discovery.
The third solution would be to use a CNI that does not use an overlay network, such as Romana, which makes services externally routable by default.
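The second option (manually telling Eureka about the service) can be sketched via Eureka's REST registration API. The payload shape below follows the documented Eureka REST operations, but it should be checked against your Eureka version; the app name, node host, IP, and NodePort are assumptions for this sketch.

```python
import json
import urllib.request

def eureka_payload(app, host, ip, port):
    """Build the JSON body for manual registration via Eureka's REST API."""
    return {
        "instance": {
            "hostName": host,
            "app": app,
            "ipAddr": ip,
            "port": {"$": port, "@enabled": "true"},
            "status": "UP",
            "dataCenterInfo": {
                "@class": "com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo",
                "name": "MyOwn",
            },
        }
    }

def register(eureka_url, app, host, ip, port):
    """POST the registration to the Eureka server (not executed in this sketch)."""
    body = json.dumps(eureka_payload(app, host, ip, port)).encode()
    req = urllib.request.Request(
        f"{eureka_url}/eureka/v2/apps/{app}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# Register the in-cluster USER service at a NodePort reachable from outside,
# so the Order Microservice can find it through Eureka.
payload = eureka_payload("USER", "node1.cluster.example", "192.0.2.10", 30080)
print(payload["instance"]["app"])
```

The registered address must be one the outside service can actually reach, which is why the NodePort (or LoadBalancer/Ingress) step above is a prerequisite.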

Fabric Service availability on start

I have a scenario where one of our services exposes WCF hosts that receive callbacks from an external service.
These hosts are dynamically created and there may be hundreds of them. I need to ensure that they are all up and running on the node before the node starts receiving requests, so they don't receive failures; this is critical.
Is there a way to ensure that the service doesn't receive requests until I say it's ready? In cloud services I could do this by containing all this code within the OnStart method.
My initial thought is that I might be able to bootstrap this before I open the communication listener - in the hope that the fabric manager only sends requests once this has been done, but I can't find any information on how this lifetime is handled.
There's no "fabric manager" that controls network traffic between your services within the cluster. If your service is up, clients or other services inside the cluster can choose to try to connect to it if they know its address. With that in mind, there are two things you have control over here:
The first is whether or not your service's endpoint is discoverable by other services or clients. This is the point at which your service endpoint is registered with Service Fabric's Naming Service, which occurs when your ICommunicationListener.OpenAsync method returns. At that point, the service endpoint is registered and others can discover it and attempt to connect to it. Of course you don't have to use the Naming Service or the ICommunicationListener pattern if you don't want to; your service can open up an endpoint whenever it feels like it, but if you don't register it with the Naming Service, you'll have to come up with your own service discovery mechanism.
The second is whether or not the node on which your service is running is receiving traffic from the Azure Load Balancer (or any load balancer if you're not hosting in Azure). This has less to do with Service Fabric and more to do with the load balancer itself. In Azure, you can use a load balancer probe to determine whether or not traffic should be sent to nodes.
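The probe-gating idea can be sketched as follows. The real service would be .NET, so this Python version is only illustrative of the mechanism: the probe endpoint answers 503 until bootstrap completes, so the load balancer keeps traffic away from the node; the port and the `create_all_wcf_hosts` step are hypothetical.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

# Flipped only after all WCF hosts have been created and opened.
ready = threading.Event()

def probe_status() -> int:
    """Status code for the load balancer probe: 503 until bootstrap is done."""
    return 200 if ready.is_set() else 503

class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(probe_status())
        self.end_headers()

# Sketch of the startup order: bring up the probe first (answering 503),
# create the hundreds of WCF hosts, then mark the node ready.
# server = HTTPServer(("", 8081), ProbeHandler)
# threading.Thread(target=server.serve_forever, daemon=True).start()
# create_all_wcf_hosts()   # hypothetical bootstrap step
# ready.set()              # probe now returns 200; LB starts sending traffic
```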
EDIT:
I added some info about the Azure Load Balancer to our documentation, hope this helps: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/