I have a stateless service with a method that creates a class instance.
I have published the service to multiple nodes.
When I call that method from another service, it is invoked on one of the nodes, so the instance is available only on that node.
If that node goes down, I lose the class instance.
Is there any way to invoke a method on all Service Fabric nodes?
Or is this possible with stateful services?
If your service has to persist state across failures, you should not use a stateless service; use a stateful service and put the data you need to persist in a Reliable Collection.
Another approach: if your class/object processes something when it receives a call, you could make it an Actor. Actor state is replicated to other nodes, so if the actor goes down, its state is reloaded when a new instance takes over.
If you really need to use a stateless service, you should persist this state in an external cache such as Redis or Memcached.
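To illustrate the first suggestion, here is a minimal sketch of a stateful service that keeps such state in a Reliable Dictionary; the service name, dictionary name, and method are illustrative assumptions, not anything from your code:

```csharp
// Hypothetical stateful service that persists state in a Reliable Dictionary.
// The dictionary name "myState" and the string key/value types are assumptions for illustration.
using System.Fabric;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class MyStatefulService : StatefulService
{
    public MyStatefulService(StatefulServiceContext context)
        : base(context) { }

    public async Task SaveStateAsync(string key, string value)
    {
        var state = await StateManager
            .GetOrAddAsync<IReliableDictionary<string, string>>("myState");

        using (var tx = StateManager.CreateTransaction())
        {
            // The write is replicated to secondary replicas before the transaction commits,
            // so the data survives the primary node going down.
            await state.SetAsync(tx, key, value);
            await tx.CommitAsync();
        }
    }
}
```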
For your main question, take a look at this other SO question: Invoke same method on all active instances of a stateless service
As I understand, Dynatrace does not support Service Fabric Actors.
Is there an extensibility mechanism where I could add myself? For example, can I plug myself before and after every ActorProxy call, and/or before and after every method call on an Actor?
You can build a customized ActorProxy to add logging on the caller side.
Next, you can add some code to the Actor to add logging on the Actor side. Use OnPreActorMethodAsync and OnPostActorMethodAsync, plus some reflection to get context.
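On the Actor side, a minimal sketch could look like this; the interface, actor class, and Trace-based logging are placeholders (Dynatrace-specific wiring is not shown):

```csharp
// Hypothetical actor that logs every incoming method call via the Pre/Post hooks.
// Replace Trace.WriteLine with your own logging/Dynatrace integration.
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

public interface IMyActor : IActor
{
    Task DoWorkAsync();
}

[StatePersistence(StatePersistence.Persisted)]
internal class LoggingActor : Actor, IMyActor
{
    public LoggingActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    protected override Task OnPreActorMethodAsync(ActorMethodContext actorMethodContext)
    {
        // Called before every actor method invocation.
        Trace.WriteLine($"Entering {actorMethodContext.MethodName} ({actorMethodContext.CallType})");
        return base.OnPreActorMethodAsync(actorMethodContext);
    }

    protected override Task OnPostActorMethodAsync(ActorMethodContext actorMethodContext)
    {
        // Called after every actor method invocation completes.
        Trace.WriteLine($"Leaving {actorMethodContext.MethodName}");
        return base.OnPostActorMethodAsync(actorMethodContext);
    }

    public Task DoWorkAsync() => Task.CompletedTask; // example actor method
}
```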
I am designing a stateless service that processes a stream of information and, based on conditions, sends emails. I want to host this in Service Fabric with more than one instance active in case of failure. However, how do I ensure the email is sent only from the "primary"?
Is active/active only valid for stateful services that are partitioned?
If the services have to be active/passive, how does a service know when it has become the active one?
There's no built-in mechanism for leader election (that you can use) inside SF. You could use a blob lease.
The leader will be the one who acquires the lease, and needs to refresh it while it's 'alive'. If it crashes, it will lose the lease and another instance can get it.
This does introduce an external dependency, lowering the overall availability % of your system.
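A minimal sketch of the blob-lease approach with the Azure.Storage.Blobs SDK; the container/blob names and the 30-second lease duration are illustrative assumptions:

```csharp
// Hypothetical leader election using an Azure Blob lease.
// The blob ("leases/leader-election") must already exist before a lease can be acquired.
using System;
using System.Threading;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public class BlobLeaseElector
{
    private readonly BlobLeaseClient _leaseClient;

    public BlobLeaseElector(string connectionString)
    {
        var blob = new BlobClient(connectionString, "leases", "leader-election");
        _leaseClient = blob.GetBlobLeaseClient();
    }

    // Returns true if this instance acquired the lease (i.e. it is now the leader).
    public async Task<bool> TryAcquireAsync(CancellationToken token)
    {
        try
        {
            await _leaseClient.AcquireAsync(TimeSpan.FromSeconds(30), cancellationToken: token);
            return true;
        }
        catch (RequestFailedException)
        {
            return false; // another instance currently holds the lease
        }
    }

    // The leader must keep renewing the lease while it is alive;
    // if it crashes, the lease expires and another instance can acquire it.
    public Task RenewAsync(CancellationToken token) =>
        _leaseClient.RenewAsync(cancellationToken: token);
}
```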
You could also create a Stateful service that does something similar.
I would go with a stateful service for a couple of reasons:
- You only want one "primary" to handle the email.
- You want a backup/replica in case the primary goes down. Stateful services give you this by default.
- It's difficult with multiple instances of a stateless service. When the stream of information is handled by multiple instances, what if the condition for sending an email does not occur on the "primary" node? You would then need a separate mechanism to transfer that data/state to the "primary" node.
Another option is to have a pool of stateless workers that process your data stream, and whenever one of them wants to send an email, it notifies another service (through Service Remoting, REST, Service Bus, or another communication channel) and that service handles the actual sending of emails.
If this email sending service is stateful, it can then handle duplicates if that's one concern you have.
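As a rough sketch of the Service Remoting variant, the workers could call the email service like this; the interface, method, and service URI are assumptions for illustration:

```csharp
// Hypothetical remoting contract and caller-side invocation.
using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

public interface IEmailService : IService
{
    Task SendEmailAsync(string to, string subject, string body);
}

public static class EmailNotifier
{
    public static Task NotifyAsync(string to, string subject, string body)
    {
        // Resolves a proxy to the email service and forwards the request;
        // the actual sending (and any de-duplication) happens in that service.
        var proxy = ServiceProxy.Create<IEmailService>(new Uri("fabric:/MyApp/EmailService"));
        return proxy.SendEmailAsync(to, subject, body);
    }
}
```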
Regarding Microsoft.ServiceFabric.Actors.Runtime.Actor, I know it has single-threaded execution from the documentation: "An actor is an isolated, independent unit of compute and state with single-threaded execution."
Is Microsoft.ServiceFabric.Actors.Runtime.ActorService thread-safe? There is little documentation about it. We use it in our application, and it does not appear to be thread-safe across multiple nodes and multiple instances. Does anyone know about this?
Reliable Actors are packaged in Reliable Services (ActorService) that can be deployed in the Service Fabric infrastructure. Actor instances are activated in a named service instance.
ActorService is just a Stateful Service. Services allow concurrent access.
edit: removed wrong remark about StateManager
What issues are you seeing?
I've been using the stateless service programming model, but I haven't really overridden the RunAsync method to run application logic. When would you normally override this method?
Services can have both autonomous behavior and interactive behavior.
You can use CreateServiceInstanceListeners to create a communication listener, which allows interaction with your service.
Your service might (also) need to perform background tasks that are not triggered by external callers. For example, it could be monitoring a queue. You can use RunAsync for that; in it you'd start an endless loop that checks the CancellationToken, then checks the queue for items and processes them (see the sketch after the list below).
Other examples (without loops) are:
- service initialization
- pre-fetching data
An example is here.
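A minimal sketch of such a RunAsync override; the in-memory ConcurrentQueue stands in for whatever queue your service actually monitors (a Reliable Queue, Service Bus, etc.):

```csharp
// Hypothetical stateless service with a background processing loop in RunAsync.
using System;
using System.Collections.Concurrent;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class Worker : StatelessService
{
    private readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();

    public Worker(StatelessServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // Endless loop; Service Fabric cancels the token when this instance should stop.
        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();

            if (_queue.TryDequeue(out var item))
            {
                // Process the dequeued item here.
            }
            else
            {
                // Nothing to do right now; wait a bit before polling again.
                await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
            }
        }
    }
}
```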
I have an orchestrator service which keeps track of the instances that are running and what request they are currently dealing with. If a new instance is required, I make a REST call to increase the instances and wait for the new instance to connect to the orchestrator. It's one request per instance.
The orchestrator tracks whether an instance is doing anything and knows which instances can be stopped; however, there is nothing in the API that allows me to reduce the number of instances by stopping a particular instance, which is what I am trying to achieve.
Is there anything I can do to manipulate the platform into deterministically stopping the instances that I want to stop? Perhaps by having long running HTTP requests to the instances I require and killing the request when it's no longer required, then making the API call to reduce the number of instances?
Part of the issue here is that I don't know the specifics of the current behavior...
Assuming you're talking about CloudFoundry/Instant Runtime applications, all of the instances of an application run behind a load balancer that uses round-robin to distribute requests across the instances (unless you have a session-affinity cookie set up). Differentiating between individual instances for incoming requests or manual scaling is not recommended and is an anti-pattern. You cannot control which instance the scale-down task will choose.
If you really want that level of control over each instance, maybe you should deploy them as separate applications: MyApp1, MyApp2, MyApp3, etc. All of your applications can have the same route (myapp.mybluemix.net). Each application can then distinguish itself by its name (from VCAP_APPLICATION), allowing you to terminate them individually.
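For instance, a small sketch (assuming a .NET app and the standard VCAP_APPLICATION JSON payload) of reading the application name at runtime:

```csharp
// Hypothetical snippet that reads the application name from VCAP_APPLICATION,
// so each separately deployed app (MyApp1, MyApp2, ...) can identify itself.
using System;
using System.Text.Json;

public static class CfIdentity
{
    public static string GetApplicationName()
    {
        var vcap = Environment.GetEnvironmentVariable("VCAP_APPLICATION");
        if (string.IsNullOrEmpty(vcap)) return "local"; // not running on CloudFoundry

        using var doc = JsonDocument.Parse(vcap);
        // "application_name" is part of the standard VCAP_APPLICATION payload.
        return doc.RootElement.GetProperty("application_name").GetString();
    }
}
```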