Pass a dynamically-generated port from one Service to another Service in the same application - azure-service-fabric

I have a Service Fabric Application which consists of two services. Both are stateless services, and each runs a single instance.
In Service A, I would like to define an Endpoint in the Resources section of the ServiceManifest.xml. I don't care which port I get; I just need one, and I'd like Service Fabric to assign it so it can ensure the port is accessible between the VMs managed by the Service Fabric cluster.
In Service B, I'd like to receive the port created for Service A so it can use it to interact with Service A. I will be defining both services with Service Fabric DNS names, so Service B will know the host of Service A (regardless of where it's running). But Service B also needs to know the port that was created for Service A (via its Endpoint declaration). Is that information passed to the services? Can it be passed as a parameter, or is there another mechanism?
Thanks for any help
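For reference, the Endpoint declaration described in the question might look like the sketch below (names are illustrative): leaving out the Port attribute is what tells Service Fabric to assign one dynamically from the cluster's application port range.

```xml
<Resources>
  <Endpoints>
    <!-- No Port attribute: Service Fabric assigns a port dynamically
         from the cluster's application port range. -->
    <Endpoint Name="ServiceAEndpoint" Protocol="http" />
  </Endpoints>
</Resources>
```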

You can discover information about the endpoints of the other service by using the QueryManager on the FabricClient. Example here.
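A minimal sketch of that lookup from Service B, assuming an application named fabric:/MyApp with a singleton-partition Service A (the service URI and endpoint name are illustrative):

```csharp
using System;
using System.Fabric;
using System.Threading.Tasks;

public static class ServiceAEndpointLookup
{
    public static async Task<string> GetServiceAAddressAsync()
    {
        var client = new FabricClient();

        // Resolve Service A's partition; the resolved partition carries
        // the endpoint addresses Service Fabric registered for it.
        ResolvedServicePartition partition =
            await client.ServiceManager.ResolveServicePartitionAsync(
                new Uri("fabric:/MyApp/ServiceA"));

        // For a stateless singleton, any listed endpoint will do.
        ResolvedServiceEndpoint endpoint = partition.GetEndpoint();

        // Address typically contains the dynamically assigned port,
        // e.g. {"Endpoints":{"ServiceAEndpoint":"http://node1:30123"}}
        return endpoint.Address;
    }
}
```

From there Service B can parse out the port and combine it with the DNS name it already knows.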
Out of curiosity, can't you use SF remoting for this?

Related

How to implement API routing with istio

My goal is to implement API routing with Istio
Assume that there are 3 services:
Service A
Service B
Service C
and Service A uses Service B.
I want to make Service A use Service C instead, without modifying Service A.
I checked Istio docs for Traffic management, Virtual Services and Destination Rules
Istio doc says
Virtual services also let you:
Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. Mapping a single virtual service to multiple "real" services is particularly useful in facilitating turning a monolithic application into a composite service built out of distinct microservices without requiring the consumers of the service to adapt to the transition. Your routing rules can specify "calls to these URIs of monolith.com go to microservice A", and so on. You can see how this works in one of our examples below.
Configure traffic rules in combination with gateways to control ingress and egress traffic.
And my understanding was that we can use Virtual Service as an abstraction layer to decouple Service A from dependency on Service B as shown below:
                                                    /--> Service B
Service A -> Virtual Service -> Destination Rule -> |
                                                    \--> Service C
But when I started to implement a POC, I discovered a problem: I cannot use the DNS name of the Virtual Service in Service A, because a VirtualService by itself does not create any DNS records.
I am confused as to what DNS name I should specify if I do not want it to be tied to either Service B or Service C.
One thought was to create an internal ingress gateway and use its hostname, but is it really necessary? I do not want all traffic in the mesh to pass through this gateway, as I think it would reduce performance.
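One option that avoids an extra gateway is to attach the VirtualService to a hostname Service A already resolves, i.e. Service B's own DNS name, and let the mesh rewrite the destination. A sketch, with illustrative service names (Service A keeps calling service-b, but Envoy routes the request to service-c):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: route-b-to-c
spec:
  hosts:
    - service-b            # existing DNS name Service A already calls
  http:
    - route:
        - destination:
            host: service-c   # the mesh sends the traffic here instead
```

The trade-off is that the abstraction is keyed to Service B's name rather than a neutral one; a neutral hostname would need some backing DNS record (e.g. a headless Service or ServiceEntry) for the client's resolution to succeed.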

In a service mesh architecture the call from service A to service B must happen through a central component?

Let's say we have the following setup:
Service A consists of a pod in a Kubernetes cluster with two containers Api A and Sidecar A. Api A communicates with the outside world through Sidecar A. Sidecar A is registered as a consumer.
Service B consists of a pod in a Kubernetes cluster with two containers Api B and Sidecar B. Api B communicates with the outside world via Sidecar B. Sidecar B is registered as a producer.
Service A and Service B could potentially have multiple instances.
The services register themselves with the service mesh through a central authority, let's call it Service Discovery, that knows about the specific instances of each service and the endpoints that they expose. Service A can also subscribe to a specific endpoint of Service B via this Service Discovery central authority. (The central authority also deals with security, tokens and certificates but I want to simplify)
Sidecar A and Sidecar B regularly communicate with Service Discovery to confirm availability.
How should Service A call an endpoint of Service B:
directly via a specific url because the Sidecar A should know about the instances of Service B via service discovery and should choose a healthy one?
or indirectly by calling a generic api of Service Discovery which should know what are the healthy instances that can be called and redirect the request to one of them accordingly?
or in some other way?
I found out that the recommended way is for service A to call endpoint B directly via a specific URL because the Sidecar A should know about the instances of Service B via service discovery and should choose a healthy instance.
The purpose of service discovery is just that: to allow services to be discoverable. It should not serve as a proxy between calls.

Change Kubernetes Service Name Without Removing It

Suppose in my microservice architecture I have a microservice that receives API calls and sends the required RPCs to other microservices in order to respond to those calls. Let's call it server.
In order to be exposed to outside world, I have a NodePort Service for this microservice named after its name (server).
Currently I am using RabbitMQ for my inter-service communications, and server is talking to other microservices via RMQ queues.
Now I want to deploy a service mesh and use gRPC for inter-service communications. So I need to create a K8s Service for the gRPC port of each of my microservices, named after the microservice (including server). However, a K8s Service named server already exists, and I would need to rename that NodePort Service in order to create its gRPC Service, but K8s doesn't let me change a Service's name. If I delete the NodePort Service and create another one with a new name, my application would be down for a couple of seconds.
Final question is, how can I achieve renaming this NodePort while having my application available to users?
You can do the following:
Create a brand new NodePort service "server-renamed" (with the same selectors and everything as "server")
Change your microservices config to use it and check all is OK
Remove the "server" service and recreate it with the new required specs.
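Step 1 above might look like the following sketch (labels and ports are illustrative; only the name changes relative to the old Service, and the nodePort must differ from the old one while both Services exist):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: server-renamed
spec:
  type: NodePort
  selector:
    app: server          # same selector as the existing "server" Service
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30081    # must not collide with the old Service's nodePort
```

Because both Services select the same pods, clients can be cut over to server-renamed with no downtime before the old Service is removed.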

Do we need External Endpoints for orchestration micro services

I have a question about the following architecture. I could not find a clear-cut answer in the Kubernetes documentation; maybe you can help me.
I have a service called 'OrchestrationService'; this service depends on 3 other services, 'ServiceA', 'ServiceB', and 'ServiceC', to be able to do its job.
All these services have their Docker Images and deployed to Kubernetes.
Now, the 'OrchestrationService' will be the only one that is going to have contact with the outside world, so it would definitely have an external endpoint. My question is: do 'ServiceA', 'ServiceB', and 'ServiceC' need one too, or will Kubernetes make those services available to 'OrchestrationService' via kube-proxy/LoadBalancer?
Thx for answers
No, you only expose OrchestrationService to the public; services A/B/C should be cluster-internal services. You create selector Services for A/B/C so OrchestrationService can connect to them. OrchestrationService can be defined as a NodePort with a fixed port, or you can use an Ingress to route traffic to it.
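A cluster-internal selector Service for one of the backends might look like this sketch (names, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: servicea
spec:
  # type defaults to ClusterIP: reachable only from inside the cluster
  selector:
    app: servicea        # must match the labels on ServiceA's pods
  ports:
    - port: 80
      targetPort: 8080
```

OrchestrationService can then reach it at http://servicea (within the same namespace) with no external exposure.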
No, you don't need external endpoints for ServiceA, ServiceB, and ServiceC.
If those pods are running successfully, then depending on your labels you can reach them from OrchestrationService by referring to them as
http://servicea/context_path
where servicea in the URL is the name of the Service defined for ServiceA.
Not as external services like a LoadBalancer, but your services A/B/C do need to publish themselves as Services inside the cluster so that other services like OrchestrationService can use them.

Fabric Service availability on start

I have a scenario where one of our services exposes WCF hosts that receive callbacks from an external service.
These hosts are dynamically created and there may be hundreds of them. I need to ensure that they are all up and running on the node before the node starts receiving requests, so they don't receive failures; this is critical.
Is there a way to ensure that the service doesn't receive requests until I say it's ready? In cloud services I could do this by containing all this code within the OnStart method.
My initial thought is that I might be able to bootstrap this before I open the communication listener - in the hope that the fabric manager only sends requests once this has been done, but I can't find any information on how this lifetime is handled.
There's no "fabric manager" that controls network traffic between your services within the cluster. If your service is up, clients or other services inside the cluster can choose to try to connect to it if they know its address. With that in mind, there are two things you have control over here:
The first is whether or not your service's endpoint is discoverable by other services or clients. This is the point at which your service endpoint is registered with Service Fabric's Naming Service, which occurs when your ICommunicationListener.OpenAsync method returns. At that point, the service endpoint is registered and others can discover it and attempt to connect to it. Of course you don't have to use the Naming Service or the ICommunicationListener pattern if you don't want to; your service can open up an endpoint whenever it feels like it, but if you don't register it with the Naming Service, you'll have to come up with your own service discovery mechanism.
The second is whether or not the node on which your service is running is receiving traffic from the Azure Load Balancer (or any load balancer if you're not hosting in Azure). This has less to do with Service Fabric and more to do with the load balancer itself. In Azure, you can use a load balancer probe to determine whether or not traffic should be sent to nodes.
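The first point can be sketched as follows: do the bootstrap work inside OpenAsync and only return once everything is ready, since the endpoint is not registered with the Naming Service until then. The WCF bootstrap helpers and address below are hypothetical placeholders.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

public class WcfBootstrapListener : ICommunicationListener
{
    public async Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        // Bring up all WCF hosts first; the service stays undiscoverable
        // until this method returns.
        await OpenAllWcfHostsAsync(cancellationToken); // hypothetical bootstrap

        // Returning the address registers it with the Naming Service;
        // only now can clients discover and call this service.
        return "net.tcp://+:8081/ServiceEndpoint";
    }

    public Task CloseAsync(CancellationToken cancellationToken)
        => CloseAllWcfHostsAsync(cancellationToken);

    public void Abort()
    {
        // Tear down the WCF hosts immediately, without waiting.
    }

    // Hypothetical helpers standing in for the dynamic host creation.
    private Task OpenAllWcfHostsAsync(CancellationToken ct) => Task.CompletedTask;
    private Task CloseAllWcfHostsAsync(CancellationToken ct) => Task.CompletedTask;
}
```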
EDIT:
I added some info about the Azure Load Balancer to our documentation, hope this helps: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/