SF Service proxy calling stateless service instances - azure-service-fabric

In my SF application, a stateful Worker service communicates with a stateless Logging service using service remoting. The stateful Worker service creates the Logging proxy in its constructor using ServiceProxy.Create<ILoggingService>(loggingServiceUri) and keeps the returned reference for its entire lifetime. There are several stateless Logging service instances running on the cluster (i.e. InstanceCount == -1). My question is:
Are calls to the ILoggingService proxy from the Worker service routed to different Logging service instances?

Yes, when you are using SF remoting to talk to a stateless service, your message will be delivered to a random instance. The proxy will keep track of healthy instances for you, and deal with transient errors.
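For illustration, here is a minimal sketch of the pattern described in the question (the ILoggingService operation, the Worker class, and the service URI are assumptions, not code from the question):
```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Remoting contract assumed from the question.
public interface ILoggingService : IService
{
    Task LogAsync(string message);
}

public class Worker
{
    private readonly ILoggingService loggingProxy;

    public Worker()
    {
        // One proxy is created and kept for the lifetime of the Worker.
        // Because the Logging service is stateless (singleton partition),
        // each call may be served by any healthy instance.
        this.loggingProxy = ServiceProxy.Create<ILoggingService>(
            new Uri("fabric:/MyApp/LoggingService"));
    }

    public Task LogSomethingAsync() => this.loggingProxy.LogAsync("hello");
}
```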

Related

In a service mesh architecture, must the call from service A to service B go through a central component?

Let's say we have the following setup:
Service A consists of a pod in a Kubernetes cluster with two containers: Api A and Sidecar A. Api A communicates with the outside world through Sidecar A. Sidecar A is registered as a consumer.
Service B consists of a pod in a Kubernetes cluster with two containers: Api B and Sidecar B. Api B communicates with the outside world via Sidecar B. Sidecar B is registered as a producer.
Service A and Service B could potentially have multiple instances.
The services register themselves with the service mesh through a central authority, let's call it Service Discovery, that knows about the specific instances of each service and the endpoints that they expose. Service A can also subscribe to a specific endpoint of Service B via this Service Discovery central authority. (The central authority also deals with security, tokens and certificates but I want to simplify)
Sidecar A and Sidecar B regularly communicate with Service Discovery to confirm availability.
How should Service A call an endpoint of Service B:
directly, via a specific URL, because Sidecar A should know about the instances of Service B via service discovery and should choose a healthy one?
or indirectly, by calling a generic API of Service Discovery, which should know which instances are healthy and redirect the request to one of them accordingly?
or in some other way?
I found out that the recommended way is for Service A to call the endpoint of Service B directly via a specific URL, because Sidecar A should know about the instances of Service B via service discovery and should choose a healthy instance.
The purpose of service discovery is just that: to allow services to be discoverable. It should not serve as a proxy between calls.
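To make the recommended flow concrete, here is a small hypothetical sketch (the "service-b" name and the path are assumptions): Api A calls Service B by its logical name, and the sidecar/data plane resolves that name to a healthy instance using the information it already received from Service Discovery. No request flows through the central authority itself.
```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class ServiceBClient
{
    private static readonly HttpClient http = new HttpClient();

    public Task<string> GetOrdersAsync()
    {
        // "service-b" is a logical name resolved by Sidecar A / the mesh data plane,
        // which picks a healthy instance of Service B. The request never goes
        // through the Service Discovery component itself.
        return http.GetStringAsync("http://service-b/api/orders");
    }
}
```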

Pass a dynamically-generated port from one Service to another Service in the same application

I have a Service Fabric application which consists of two services. Both are stateless services and run a single instance each.
In Service A, I would like to define an Endpoint in the Resources section of the ServiceManifest.xml. I don't care what port I get, I just need one, and I'd like to get it from Service Fabric so it can ensure it's accessible between the VMs managed by the Service Fabric cluster.
In Service B, I'd like to pass in the port created for Service A so it can use it to interact with Service A. I will be defining both services with Service Fabric DNS names, so Service B will know the host of Service A (regardless of where it's running). But Service B also needs to know the port that was created for Service A (via its Endpoint declaration). Is that information passed to the services? Can it be passed as a parameter, or is there another mechanism?
Thanks for any help
You can discover information about the endpoints of the other service by using the QueryManager on the FabricClient. Example here.
Out of curiosity, can't you use SF remoting for this?
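The linked example isn't reproduced here; as one illustration (service and application names are assumptions), Service B can resolve the endpoint address that Service Fabric assigned to Service A using FabricClient's partition-resolution API, a close sibling of the QueryManager APIs mentioned above:
```csharp
using System;
using System.Fabric;
using System.Linq;
using System.Threading.Tasks;

public static class ServiceAEndpoints
{
    // Service name is an assumption; use your own application/service names.
    private static readonly Uri ServiceAName = new Uri("fabric:/MyApp/ServiceA");

    public static async Task<string> ResolveServiceAAddressAsync()
    {
        // In real code, reuse a single FabricClient instead of creating one per call.
        var fabricClient = new FabricClient();

        // Resolve the (singleton) partition of the stateless Service A.
        ResolvedServicePartition partition =
            await fabricClient.ServiceManager.ResolveServicePartitionAsync(ServiceAName);

        // Each endpoint's Address is a JSON blob listing the listener addresses
        // that Service A registered, including the dynamically assigned port.
        return partition.Endpoints.First().Address;
    }
}
```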

Convert Service Fabric remoting call to REST

Currently in our project, we have a few Stateless and Stateful services, and then we have an API (which is again a Stateless service). Our API service is exposed over HTTP and runs on the front-end nodes in the cluster. Any client from outside hits the WebAPI stateless service, which in turn can call other services via SF remoting. The other services are not exposed over HTTP, and individual services can also call each other via SF remoting.
As part of a new requirement, some other services hosted in another cloud (OpenShift) need to access our Stateless and Stateful services directly (i.e. without the WebAPI service) over REST. I understand that we can expose our Stateless and Stateful services over HTTP by writing our own custom HttpCommunicationListener (which should implement "ICommunicationListener"). But apart from this, I guess we would also need to configure a reverse proxy, load balancer, etc., to ensure that one URL works for all requests.
Is this something that can be achieved? If yes, can somebody point me to documentation or a code sample?
I recommend having a look at Traefik as a reverse proxy and load balancer.
You can run it as a (containerized) ingress routing service inside the cluster, and direct HTTP calls to your services.
Here's the documentation.
Here's how to get started.
Here's an example.
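If you do write the custom listener the question mentions, here is a rough, hedged sketch of what an ICommunicationListener built on HttpListener could look like (the endpoint name and the omitted request handling are placeholders); Traefik or another reverse proxy would then route external calls to the address this listener registers:
```csharp
using System.Fabric;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

public class HttpCommunicationListener : ICommunicationListener
{
    private readonly StatelessServiceContext context;
    private HttpListener httpListener;

    public HttpCommunicationListener(StatelessServiceContext context)
    {
        this.context = context;
    }

    public Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        // "ServiceEndpoint" must match an Endpoint declared in ServiceManifest.xml.
        var endpoint = this.context.CodePackageActivationContext.GetEndpoint("ServiceEndpoint");
        string listenAddress = $"http://+:{endpoint.Port}/";

        this.httpListener = new HttpListener();
        this.httpListener.Prefixes.Add(listenAddress);
        this.httpListener.Start();
        // TODO: start a loop that accepts and handles requests (omitted here).

        // The address returned here is what gets registered with the Naming Service.
        string publishAddress = listenAddress.Replace("+", this.context.NodeContext.IPAddressOrFQDN);
        return Task.FromResult(publishAddress);
    }

    public Task CloseAsync(CancellationToken cancellationToken)
    {
        this.httpListener?.Stop();
        return Task.CompletedTask;
    }

    public void Abort()
    {
        this.httpListener?.Abort();
    }
}
```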

Send RPC call to only 1 node in Azure Service Fabric

When I make an RPC (service remoting) call from one service to another service in the same application that is deployed on multiple nodes, it appears to go to all nodes at once. I only want it to go to one node each time the call is made.
Does Service Fabric have a way to do that? How can I leverage the built-in load balancing to control where the call goes?
This is deployed on a local cluster.
If your service is stateless and uses Singleton partitioning, calling an operation using the ServiceProxy will invoke the operation on one random service instance. Using SF remoting, you can't target a specific instance.
If your service is stateful, calling an operation using the ServiceProxy (created with a specific ServicePartitionKey) will invoke the operation on one of the replicas of your service, using the primary replica by default.
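A hedged sketch of both cases (the IMyService interface and the service URIs are assumptions):
```csharp
using System;
using Microsoft.ServiceFabric.Services.Client;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Remoting contract assumed for illustration.
public interface IMyService : IService { /* remoting operations */ }

public static class ProxyExamples
{
    public static void CreateProxies()
    {
        // Stateless service with Singleton partitioning:
        // each call is handled by one (arbitrary) healthy instance.
        IMyService statelessProxy = ServiceProxy.Create<IMyService>(
            new Uri("fabric:/MyApp/MyStatelessService"));

        // Stateful service: the ServicePartitionKey selects a partition,
        // and by default the call goes to that partition's primary replica.
        IMyService statefulProxy = ServiceProxy.Create<IMyService>(
            new Uri("fabric:/MyApp/MyStatefulService"),
            new ServicePartitionKey(42));
    }
}
```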

Fabric Service availability on start

I have a scenario where one of our services exposes WCF hosts that receive callbacks from an external service.
These hosts are dynamically created, and there may be hundreds of them. I need to ensure that they are all up and running on the node before the node starts receiving requests, so they don't receive failures; this is critical.
Is there a way to ensure that the service doesn't receive requests until I say it's ready? In cloud services I could do this by containing all this code within the OnStart method.
My initial thought is that I might be able to bootstrap this before I open the communication listener - in the hope that the fabric manager only sends requests once this has been done, but I can't find any information on how this lifetime is handled.
There's no "fabric manager" that controls network traffic between your services within the cluster. If your service is up, clients or other services inside the cluster can choose to try to connect to it if they know its address. With that in mind, there are two things you have control over here:
The first is whether or not your service's endpoint is discoverable by other services or clients. This is the point at which your service endpoint is registered with Service Fabric's Naming Service, which occurs when your ICommunicationListener.OpenAsync method returns. At that point, the service endpoint is registered and others can discover it and attempt to connect to it. Of course you don't have to use the Naming Service or the ICommunicationListener pattern if you don't want to; your service can open up an endpoint whenever it feels like it, but if you don't register it with the Naming Service, you'll have to come up with your own service discovery mechanism.
The second is whether or not the node on which your service is running is receiving traffic from the Azure Load Balancer (or any load balancer if you're not hosting in Azure). This has less to do with Service Fabric and more to do with the load balancer itself. In Azure, you can use a load balancer probe to determine whether or not traffic should be sent to nodes.
EDIT:
I added some info about the Azure Load Balancer to our documentation, hope this helps: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/
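To make the ordering concrete: anything you complete inside OpenAsync before returning happens before the endpoint is registered with the Naming Service, so nobody can discover the address yet. A rough, self-contained sketch (the WCF bootstrap method and the address are placeholders, not your actual code):
```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

public class CallbackCommunicationListener : ICommunicationListener
{
    // Placeholder address; in real code this would come from the endpoint resource.
    private readonly string listeningAddress = "net.tcp://localhost:9000/callbacks";

    public async Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        // 1. Bootstrap everything first; nothing is discoverable at this point.
        await CreateAndOpenAllWcfHostsAsync(cancellationToken);

        // 2. Only after this method returns is the returned address registered
        //    with the Naming Service and thus discoverable by other services.
        return this.listeningAddress;
    }

    public Task CloseAsync(CancellationToken cancellationToken) => Task.CompletedTask;

    public void Abort() { }

    // Placeholder for creating and opening the (possibly hundreds of) WCF hosts.
    private static Task CreateAndOpenAllWcfHostsAsync(CancellationToken cancellationToken)
        => Task.CompletedTask;
}
```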