Can you add or remove Service Fabric stateful service named partitions - azure-service-fabric

The documentation mentions that we cannot change the partition count of a Service Fabric stateful service after the service is deployed. Is the same true when named partitions are used? Can we add or remove named partitions from a stateful service during its lifetime, and are there any side effects?

As a general rule, no: the partitions cannot be modified after the service is created, regardless of the partitioning scheme (Int64 range, Named, or Singleton). The definition is specified in your ApplicationManifest.xml and should not change.
That said, if you estimated the partition count incorrectly when you originally designed the solution, there's no particular reason you can't create a v2 of your service with the newly determined partition count and either build the migration logic into it or create a (likely singleton) stateless migration service that moves your data from one service to the other, mapping the data between partitions as you see fit.
Once the data is migrated, you could keep a backup of the state in either service (just in case), update the references in your solution to point to the new service, and delete the old one.
It's not as clean as a built-in migration experience would be, but it gets the job done.
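A rough sketch of what such a migration service's copy loop might look like follows; the IDataServiceV1/IDataServiceV2 interfaces and the DataItem type are purely illustrative (not part of the Service Fabric SDK), and it assumes the old service uses an Int64 range scheme where each partition index is a valid key.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Hypothetical remoting contracts (serialization attributes omitted for brevity).
public class DataItem { public long Key; public string Value; }
public interface IDataServiceV1 : IService { Task<List<DataItem>> GetAllItemsAsync(); }
public interface IDataServiceV2 : IService { Task UpsertItemAsync(DataItem item); }

public static class PartitionMigrator
{
    // Copies every item out of each old partition and lets the new service's
    // partition scheme decide where it lands.
    public static async Task MigrateAsync(Uri oldServiceUri, Uri newServiceUri, int oldPartitionCount)
    {
        for (int i = 0; i < oldPartitionCount; i++)
        {
            // One proxy per old partition (partition index used as the Int64 key here).
            var source = ServiceProxy.Create<IDataServiceV1>(oldServiceUri, new ServicePartitionKey(i));

            foreach (var item in await source.GetAllItemsAsync())
            {
                // Route each item to the v2 service by its own key; the new
                // partition count can be completely different from the old one.
                var target = ServiceProxy.Create<IDataServiceV2>(newServiceUri, new ServicePartitionKey(item.Key));
                await target.UpsertItemAsync(item);
            }
        }
    }
}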

Related

Microservice data replication patterns

In a microservice architecture, we usually have two ways for two microservices to communicate. Let's say service A needs to get information from service B. The first option is a remote call, usually synchronous over HTTPS, where service A queries an API hosted by service B.
The second option is adopting an event-driven architecture, where the state of service B is published and consumed by service A asynchronously. Using this model, service A can update its own database with the information from service B's events, and all queries are made locally against this database. This approach has the advantage of better decoupling between microservices, from development through operations, but it comes with some disadvantages related to data replication.
The first is the higher consumption of disk space, since the same data can reside in the databases of every microservice that needs it. The second is worse in my opinion: data can become stale if service B can't process its subscription as fast as needed, or the data may not be available to service A at the moment it is created in service B, given the eventual consistency of the model.
Let's say we're using Kafka as an event hub, and its topics are configured with 7 days of data retention. Service A is kept in sync as service B publishes its state. After two weeks, a new service C is deployed and its database needs to be enriched with all the information that service B holds. We can only get partial information from the Kafka topics since the oldest events are gone. My question is: what patterns can we use to achieve this enrichment of the microservice's database (besides asking service B to republish all its current state to the event hub)?
There are two options:
You can enable log compaction in Kafka for an individual topic. That will keep the most recent value for a given key, discarding older updates. This saves space and also holds more useful data than the normal mode for a given retention period.
Assuming you take a backup of service B's database on a daily basis, when you introduce a new service C you first need to create the initial state of C from the latest backup of B, and then replay the Kafka topic events from the particular offset that represents the data produced after the backup (a minimal sketch of this replay step follows below).
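To illustrate the second option, here is a minimal sketch using the Confluent.Kafka client that replays a topic from a known offset into service C's own store; the broker address, topic name, and the commented-out upsert call are placeholders, not part of any existing service.

using System;
using Confluent.Kafka;

class TopicReplay
{
    static void ReplayFrom(long startOffset)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "kafka:9092",      // assumed broker address
            GroupId = "service-c-bootstrap",
            EnableAutoCommit = false
        };

        using var consumer = new ConsumerBuilder<string, string>(config).Build();

        // Start reading exactly where the restored backup of service B ends.
        consumer.Assign(new TopicPartitionOffset("service-b-state", new Partition(0), new Offset(startOffset)));

        while (true)
        {
            var result = consumer.Consume(TimeSpan.FromSeconds(5));
            if (result == null) break;            // caught up: nothing new within the timeout

            // Apply the event to service C's own database (hypothetical call):
            // serviceCStore.Upsert(result.Message.Key, result.Message.Value);
            Console.WriteLine($"Replayed key {result.Message.Key} at offset {result.Offset}");
        }

        consumer.Close();
    }
}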
Your concern is valid, but at the same time the microservices approach is give and take: you get loose coupling at the cost of an individual database for each service. There is no single right answer to microservices architecture; it really depends on what you are trying to achieve.
According to the CAP theorem you have to compromise between consistency and availability, and in most cases we go with eventual consistency. If your service A is not consistent with B, then it eventually will be; that is the trade-off you accept in exchange for availability.
Another thing about microservices is that you only keep a reference to data from another service, and perhaps a very limited amount of the actual data, but definitely not much. Even that is only worthwhile if replicating the data makes your service independent and autonomous; if you can't achieve that even after replicating the data, then there is no point. For example, your shipping service holds the complete history of order transitions, but your booking service only keeps the latest status of the order (e.g. in transit, on board, etc.). The user goes to booking and you show the current status of the order, and if the user clicks for details you fetch the full order-transition history from the shipping microservice. Now, if at some point your shipping service goes down and the user comes to check the status, you at least have the current order status, even though you can't show the details, because that status is replicated in the booking service.
Regarding new services joining the system at a later stage, event sourcing is the pattern you use for this kind of scenario. It is a complex pattern, but it will bring your newly added services to the state you want them to be in. You basically save all your events in an event store and replay them to attain the current state of the system, pre-populating service C's database from those events.
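As a minimal illustration of that replay step, the sketch below folds a full event history into a current-state dictionary that could seed service C's database; the IEventStore interface and OrderEvent type are assumptions made up for the example, not any specific product's API.

using System.Collections.Generic;
using System.Linq;

public record OrderEvent(string OrderId, string Status, long Sequence);

public interface IEventStore
{
    // Returns every event ever stored. A real store would stream or page these.
    IEnumerable<OrderEvent> ReadAll();
}

public static class ServiceCBootstrapper
{
    // Rebuilds the current status of every order by folding over the full event history.
    public static Dictionary<string, string> RebuildCurrentState(IEventStore store)
    {
        var state = new Dictionary<string, string>();
        foreach (var evt in store.ReadAll().OrderBy(e => e.Sequence))
        {
            state[evt.OrderId] = evt.Status;   // later events overwrite earlier ones
        }
        return state;                          // pre-populate service C's database from this
    }
}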

Stateful or Stateless service for processing Service Bus queues

I have a session-enabled Azure Service Bus queue. I need some form of service that can read messages from the queue, process them, and save the result (in memory for later retrieval). We are using Azure Service Fabric in our current architecture. I have a few questions about whether to choose a Stateful or a Stateless service.
If I use a Stateful service, then based on the documentation my understanding is that the service will be running on one primary node (assuming one partition) with two active secondary replicas. That means that if I have a 10-node Service Fabric cluster, this stateful service will primarily be utilizing only one node (VM).
So if I add a listener to this stateful service to read messages from the queue, the service on the primary node will read the messages and the remaining 9 nodes won't be utilized. Is this correct?
Whereas if I use a Stateless service, I can create instances on all 10 nodes, and all of them can listen to messages on the queue and process them in parallel. However, I will lose the option to save the results.
Please advise.
So if I add a listener to this stateful service to read messages from the queue, the service on the primary node will read the messages and the remaining 9 nodes won't be utilized. Is this correct?
That is correct. In the stateful service scenario, only the primary replica has its listener opened and does the work. The other replicas can be used in read-only mode, but they would not be writing anything into the reliable collections.
Whereas if I use a Stateless service, I can create instances on all 10 nodes, and all of them can listen to messages on the queue and process them in parallel.
Exactly. Stateless services can perform their work in parallel, and no state is persisted. That's also the reason why there are no reliable collections available for this Service Fabric model.
However, I will lose the option to save the results.
Not necessarily true. You could still save your data in a centralized/shared database, just as you did with stateless solutions in the past (for example Cloud Services or an Azure Web App).
What you should ask yourself is what problem you are solving. If you have data sharding, Stateful makes more sense. If you don't have data sharding and/or you need to scale out your processing power rather than scale up, Stateless is the better approach.
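As a rough sketch of the stateless approach with externalized results, each instance could run something like the following (using the Azure.Messaging.ServiceBus client); the connection string, queue name, and SaveResultAsync call are placeholders for your own configuration and storage.

using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public class QueueWorker
{
    // Something like this could be started from each stateless instance (e.g. from RunAsync).
    public async Task RunAsync(CancellationToken cancellationToken)
    {
        await using var client = new ServiceBusClient("<service-bus-connection-string>");

        // Session-enabled queue: the processor locks a session and feeds its
        // messages to this instance; other instances pick up other sessions.
        await using var processor = client.CreateSessionProcessor(
            "my-queue",
            new ServiceBusSessionProcessorOptions { MaxConcurrentSessions = 8, AutoCompleteMessages = false });

        processor.ProcessMessageAsync += async args =>
        {
            var result = Process(args.Message.Body.ToString());   // your processing logic
            await SaveResultAsync(args.SessionId, result);        // hypothetical: write to a shared store
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += args => Task.CompletedTask; // log errors in a real service

        await processor.StartProcessingAsync(cancellationToken);
        await Task.Delay(Timeout.Infinite, cancellationToken);     // keep the instance alive
    }

    private static string Process(string body) => body.ToUpperInvariant();
    private static Task SaveResultAsync(string sessionId, string result) => Task.CompletedTask;
}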

How to run something on each node in service fabric

In a Service Fabric application, using Actors or Services, what would the design be if you wanted to make sure that your block of code runs on each node?
My first idea would be that it has to be a Service with the instance count set to -1, but what about cases where you had set it to 3 instances? How would you design the service so that it ensures some operation runs on each instance?
My own idea would be to have an Actor with state controlling the operations that need to run, and it would iterate over the services using a ServiceProxy to call methods on each instance; but that's just a naive idea, and I don't know whether it's possible or whether it's the proper way to do so.
Some background info
Only Stateless services can be given a -1 for instance count. You can't use a ServiceProxy to target a specific instance.
Stateful services are deployed using 1 or more partitions (data shards). Partition count is configured in advance, as part of the service deployment and can't be changed automatically. For instance if your cluster is scaled out, partitions aren't added automatically.
Autonomous workers
Maybe you can invert the control flow by running Stateless services (on all nodes) and having them query a 'repository' for work items. The repository could be a Stateful service that stores work items in a Queue.
This way, adding more instances (scaling out the cluster) increases throughput without code modification. The stateless service instances become autonomous workers.
(as opposed to an intelligent orchestrator Actor)
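A minimal sketch of such an autonomous worker loop follows; the IWorkRepository remoting interface and the fabric:/MyApp/WorkRepository service name are hypothetical, and the stateful repository behind it would typically wrap a reliable queue.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Hypothetical contract exposed by the stateful repository service.
public interface IWorkRepository : IService
{
    Task<string> TryDequeueWorkItemAsync();   // returns null when there is no work
}

public class WorkerLoop
{
    // Each stateless instance (one per node with InstanceCount = -1) runs this loop.
    public async Task RunAsync(CancellationToken cancellationToken)
    {
        var repository = ServiceProxy.Create<IWorkRepository>(
            new Uri("fabric:/MyApp/WorkRepository"),   // placeholder service name
            new ServicePartitionKey(0));               // single Int64 partition assumed

        while (!cancellationToken.IsCancellationRequested)
        {
            var workItem = await repository.TryDequeueWorkItemAsync();
            if (workItem == null)
            {
                await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);  // back off when idle
                continue;
            }

            // Process the work item locally; adding nodes adds workers without code changes.
            Console.WriteLine($"Processing {workItem}");
        }
    }
}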

Changing number of partitions for a reliable actor service

When I create a new Service Fabric actor the underlying (auto generated) actor service is configured to use 10 partitions.
I'm wondering how much I need to care about this value?
In particular, I wonder whether the Actor Runtime has support for changing the number of partitions of an actor service on a running cluster.
The Partition Service Fabric reliable services topic says:
In rare cases, you may end up needing more partitions than you have initially chosen. As you cannot change the partition count after the fact, you would need to apply some advanced partition approaches, such as creating a new service instance of the same service type. You would also need to implement some client-side logic that routes the requests to the correct service instance, based on client-side knowledge that your client code must maintain.
However, due to the nature of Actors and the fact that they are managed by the Actor Runtime, I'm tempted to believe that it would indeed be possible to do this; that the Actor Runtime would be able to take care of all the heavy lifting required to re-partition actor instances.
Is that at all possible?
The number of partitions in a running service cannot be changed. This is true of Actors as well as Reliable Services. Typically, you would want to pick a large number of partitions (more than the number of nodes) up front and then scale out the number of nodes in the cluster instead of trying to repartition your data on the fly. Take a look at Abhishek and Matthew's comments in the discussion here for some ideas on how to estimate how many partitions you might need.
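For context, the partition count is something you commit to when the service is created, whether that happens declaratively in the manifest or imperatively through FabricClient. The sketch below shows the imperative form with a fixed Int64 range scheme; the application, service, and type names and the count of 26 are placeholders.

using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

public static class ServiceCreator
{
    public static async Task CreateWithFixedPartitionsAsync()
    {
        var fabricClient = new FabricClient();

        var description = new StatefulServiceDescription
        {
            ApplicationName = new Uri("fabric:/MyApp"),
            ServiceName = new Uri("fabric:/MyApp/MyActorService"),
            ServiceTypeName = "MyActorServiceType",
            HasPersistedState = true,
            TargetReplicaSetSize = 3,
            MinReplicaSetSize = 3,
            // Chosen once, up front: more partitions than nodes, so the cluster can
            // later grow by adding nodes rather than by repartitioning data.
            PartitionSchemeDescription = new UniformInt64RangePartitionSchemeDescription
            {
                PartitionCount = 26,
                LowKey = long.MinValue,
                HighKey = long.MaxValue
            }
        };

        await fabricClient.ServiceManager.CreateServiceAsync(description);
    }
}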

How can I reach a specific replica of a stateless service

I've created a stateless service within Service Fabric. It has a SingletonPartition, but multiple instances (InstanceCount is -1 in my case).
I want to communicate with a specific replica of this service. To find all replicas I use:
var fabricClient = new FabricClient();
var serviceUri = new Uri(SERVICENAME);
Partition partition = (await fabricClient.QueryManager.GetPartitionListAsync(serviceUri)).First();
foreach(Replica replica in await fabricClient.QueryManager.GetReplicaListAsync(partition.PartitionInformation.Id))
{
    // communicate with this replica, but how to construct the proxy?
    //var eventHandlerServiceClient = ServiceProxy.Create<IService>(new Uri(replica.ReplicaAddress));
}
The problem is that there is no overload of ServiceProxy that creates one for a specific replica. Is there another way to communicate with a specific replica?
Edit
The scenario we are building is the following. We have different moving parts with counter information: one named-partitioned stateful service (with a couple of hundred partitions), one Int64-partitioned stateful service, and one actor with state. To aggregate the counter information, we need to reach out to all service partitions and actor instances.
We could of course reverse it and let everyone send their counts to a single (partitioned) service, but that would add a network call to the normal flow (and thus overhead).
Instead, we came up with the following. The mentioned services and actors are combined into one executable and one service manifest, so they run in the same process. We add a stateless service with InstanceCount -1 alongside the mentioned services and actors. All counter information is stored in a static variable, which the stateless service can read.
Now, we only need to reach out to the stateless service (which has an upper limit of the number of nodes).
Just to get some terminology out of the way first, "replica" only applies to stateful services where you have a unique replica set for each partition of a service and replicate state between them for HA. Stateless services just have instances, all of which are equal and identical.
Now to answer your actual question: ServiceProxy doesn't have an option to connect to a specific instance of a deployed stateless service. You have the following options:
Primary replica: connect to the primary replica of a stateful service partition.
Random instance: connect to a random instance of a stateless service.
Random replica: connect to a random replica - regardless of its role - of a stateful service partition.
Random secondary replica: connect to a random secondary replica of a stateful service partition.
E.g.:
ServiceProxy.Create<IMyService>(serviceUri, partitionKey, TargetReplicaSelector.RandomInstance)
So why no option to connect to a specific stateless service instance?
Well, I would turn this question around and ask why would you want to connect to a specific stateless service instance? By definition, each stateless instance should be identical. If you are keeping some state in there - like user sessions - then now you're stateful and should use stateful services.
You might think of intelligently deciding which instance to connect to for load balancing, but again since it's stateless, no instance should be doing more work than any other as long as requests are distributed evenly. And for that, Service Proxy has the random distribution option.
With that in mind, if you still have some reason to seek out specific stateless service instances, you can always use a different communication stack - like HTTP - and do whatever you want.
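As one example of the "different communication stack" route, the sketch below enumerates every published instance endpoint with FabricClient and calls each one over HTTP; the /notify route, the plain-text payload, and the assumption that every instance publishes an HTTP address are all illustrative.

using System;
using System.Fabric;
using System.Fabric.Query;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class BroadcastClient
{
    public static async Task NotifyAllInstancesAsync(Uri serviceUri, string message)
    {
        var fabricClient = new FabricClient();
        using var http = new HttpClient();

        foreach (Partition partition in await fabricClient.QueryManager.GetPartitionListAsync(serviceUri))
        {
            foreach (Replica replica in await fabricClient.QueryManager.GetReplicaListAsync(partition.PartitionInformation.Id))
            {
                // ReplicaAddress holds the published endpoints as JSON,
                // e.g. {"Endpoints":{"":"http://10.0.0.4:8080"}}.
                using var doc = JsonDocument.Parse(replica.ReplicaAddress);
                foreach (var endpoint in doc.RootElement.GetProperty("Endpoints").EnumerateObject())
                {
                    await http.PostAsync(
                        endpoint.Value.GetString() + "/notify",
                        new StringContent(message, Encoding.UTF8, "text/plain"));
                }
            }
        }
    }
}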
"Well, I would turn this question around and ask why would you want to connect to a specific stateless service instance?"
One example would be if you have multiple (3) stateless service instances, each holding WebSocket connections to different clients, say 500 each. If you want to notify all 1500 (3 x 500) users with the same message, and it were possible to connect directly to a specific instance (which I would expect to be possible, since I can query for those instances using the FabricClient), I could send the message to each instance, which would then forward it to all of its connected clients.
Instead we have to come up with one of several workarounds:
Have all instances connect to some evented system that allows them to trigger on incoming messages, e.g. Azure Event Hubs, Azure Service Bus, or Redis Cache.
Host an additional endpoint, as mentioned here, which makes it 3 endpoints per service instance: WCF, WebSocket, HTTP.
Change to a stateful partitioned service which doesn't hold any state or any replicas, but simply allows us to call individual partitions.
We are currently having some serious issues with Redis Cache, so we are migrating away from it, and we would like to avoid external dependencies such as Event Hubs and Service Bus just for this scenario.
We send many messages each second, which adds overhead when we have to call HTTP and the request then has to transition over to the WebSocket context.
In order to target a specific instance of a stateless service you can use named partitions. You can have a single instance per partition and use multiple named partitions. For example, you can have 5 named partitions [0,1,2,3,4], each with only one instance of the service. Then you can call it like this:
ServiceProxy.Create<IMyService>(serviceUri, partitionKey, TargetReplicaSelector.RandomInstance)
where the partitionKey parameter has one of the values [0,1,2,3,4].
A real example would be:
_proxyFactory.CreateServiceProxy<IMyService>(
    _myServiceUri,
    new ServicePartitionKey("0"), // One of "0", "1", "2", "3", "4"
    TargetReplicaSelector.Default,
    MyServiceEndpoints.ServiceV1);
This way you can choose one of the 5 instances. But all 5 instances may not always be available; for example during startup, or when the service dies and Service Fabric is recreating it, or when it is in the InBuild stage... So for this reason you should also run partition discovery.
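A minimal sketch of that partition discovery step, building on the IMyService example above, could look like the following; the readiness check is a simplification and retry handling is omitted.

using System;
using System.Collections.Generic;
using System.Fabric;
using System.Fabric.Query;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;
using Microsoft.ServiceFabric.Services.Remoting.Client;

public static class NamedPartitionDiscovery
{
    // Enumerates the named partitions that currently exist and are ready,
    // and returns one proxy per partition (and therefore per instance).
    public static async Task<List<IMyService>> GetProxiesAsync(Uri serviceUri)
    {
        var fabricClient = new FabricClient();
        var proxies = new List<IMyService>();

        foreach (Partition partition in await fabricClient.QueryManager.GetPartitionListAsync(serviceUri))
        {
            // Skip partitions that are not yet ready (e.g. still being built or reconfiguring).
            if (partition.PartitionStatus != ServicePartitionStatus.Ready)
                continue;

            var info = (NamedPartitionInformation)partition.PartitionInformation;
            proxies.Add(ServiceProxy.Create<IMyService>(
                serviceUri,
                new ServicePartitionKey(info.Name)));   // "0".."4" in the example above
        }

        return proxies;
    }
}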