Service Fabric dynamic partitioning - azure-service-fabric

So I am doing some research into using Service Fabric for a very large application. One thing I need to have is a service that is partitioned by name, which seems fairly trivial at the application manifest level.
However, I really would like to be able to add and remove named partitions on the fly without having to republish the application.
Each partition represents our equivalent of a tenant, and we want to have a backend management app to add new tenants.
Each partition will be a long-running application that fires up a TCP server that uses a custom protocol, and I'll need to be able to query for the address by name from the cluster.
Is this possible with Service Fabric, and if so is there any documentation on this, or something I should be looking for?

Each partition represents our equivalent of a tenant, and we want to have a backend management app to add new tenants.
You need to rethink your model. Partitioning is for distributing data so it is accessible quickly, for both reads and writes, but within the same logical container.
If you want multitenancy in Service Fabric, you can deploy an Application multiple times to the cluster.
From Visual Studio it seems you can only have one instance of an Application. This is because DefaultServices are defined in the ApplicationManifest.xml. That is fine for developing on the local Service Fabric cluster. For production you might want to consider deploying the application with PowerShell, which opens up the possibility of deploying the same application multiple times, with settings for each instance (like: tenant name, security, ...).
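For illustration, here is a minimal C# sketch of that per-instance deployment from code, assuming an application type MyAppType (version 1.0.0) is already registered in the cluster and its ApplicationManifest.xml declares a TenantName parameter (all three are placeholders); the PowerShell equivalent is New-ServiceFabricApplication:

    using System;
    using System.Collections.Specialized;
    using System.Fabric;
    using System.Fabric.Description;
    using System.Threading.Tasks;

    static class TenantApplications
    {
        // Creates another instance of an already-registered application type.
        public static Task AddTenantAppAsync(FabricClient client, string tenant)
        {
            var parameters = new NameValueCollection { { "TenantName", tenant } };
            var description = new ApplicationDescription(
                new Uri($"fabric:/MyApp-{tenant}"), // application name must be unique per instance
                "MyAppType",                        // application type name from ApplicationManifest.xml
                "1.0.0",                            // registered application type version
                parameters);
            return client.ApplicationManager.CreateApplicationAsync(description);
        }
    }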
And not only Applications can be deployed multiple times; stateful/stateless services can as well. So you could have one application, and for each tenant you deploy a service of a certain type. Services are discoverable via the naming service inside Service Fabric; see the FabricClient class for more info on that.
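As a rough sketch of that service-per-tenant approach, assuming an application named fabric:/MyApp that registers a TenantServiceType (both names are illustrative, not from the question), a backend management app could create and resolve per-tenant service instances like this:

    using System;
    using System.Fabric;
    using System.Fabric.Description;
    using System.Threading.Tasks;

    static class TenantServices
    {
        static readonly FabricClient Client = new FabricClient();

        // Create a named service instance for a new tenant at runtime, no republish needed.
        public static Task AddTenantAsync(string tenant)
        {
            var description = new StatefulServiceDescription
            {
                ApplicationName = new Uri("fabric:/MyApp"),               // assumed application name
                ServiceName = new Uri($"fabric:/MyApp/tenants/{tenant}"), // doubles as the lookup key
                ServiceTypeName = "TenantServiceType",                    // assumed service type
                HasPersistedState = true,
                MinReplicaSetSize = 3,
                TargetReplicaSetSize = 3,
                PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
            };
            return Client.ServiceManager.CreateServiceAsync(description);
        }

        // Resolve the tenant's TCP endpoint through the naming service.
        public static async Task<string> ResolveTenantAddressAsync(string tenant)
        {
            ResolvedServicePartition partition = await Client.ServiceManager
                .ResolveServicePartitionAsync(new Uri($"fabric:/MyApp/tenants/{tenant}"));
            return partition.GetEndpoint().Address;
        }
    }

The service name itself acts as the lookup key, so the management app only needs the tenant name to find the TCP endpoint.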

It is not possible to change the partition count for an existing application.
From https://azure.microsoft.com/en-us/documentation/articles/service-fabric-concepts-partitioning/#plan-for-partitioning (emphasis mine):
In rare cases, you may end up needing more partitions than you have initially chosen. As you cannot change the partition count after the fact, you would need to apply some advanced partition approaches, such as creating a new service instance of the same service type. You would also need to implement some client-side logic that routes the requests to the correct service instance, based on client-side knowledge that your client code must maintain.
You are encouraged to do up-front capacity planning to determine the maximum number of partitions you will need - and if you end up needing more, you'll need to implement some special client-side handling to cope.
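A minimal sketch of what that client-side handling could look like, assuming you spill over into additional named instances of the same service type (the naming convention here is an assumption):

    using System;

    static class SpilloverRouter
    {
        // Picks one of N service instances from a stable FNV-1a hash of the key, so the
        // same key always routes to the same instance as long as N doesn't change.
        public static Uri RouteToServiceInstance(string partitionKey, int serviceInstanceCount)
        {
            uint hash = 2166136261;
            foreach (char c in partitionKey)
                hash = (hash ^ c) * 16777619;
            int instance = (int)(hash % (uint)serviceInstanceCount);
            return new Uri($"fabric:/MyApp/MyService-{instance}"); // assumed naming convention
        }
    }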

We had the same problem and ended up creating an instance of the service for each tenant. This is pretty easy to do and will scale to any number of tenants.

Related

What is a common strategy for synchronized communication between replicas of the same pods?

Let's say we have the following apps:
API app: responsible for serving user requests.
Backend app: responsible for handling user requests which are long-running tasks. It updates the progress to a database (Postgres) and a distributed cache (Redis).
Both apps are scalable services. A single Backend app handles multiple tenants (e.g. a Customer here), but one customer is assigned to a single backend app only.
I have a use case where I need the API layer to connect to the specific replica which is handling that customer. Do we have a common pattern for this?
A few strategies I have in mind:
Pub/Sub: the problem is we want a synchronous, guaranteed response, probably using Redis.
gRPC: using the pod IP to connect to a specific pod is not a standard way.
Creating a Service at runtime by adding labels to the replicas and using those. -- Looks promising
Do let me know if there is a common pattern, an example architecture for this, or a standard way of doing this.
Note: [The above is a simulation of the production use case; names and the actual use case are changed.]
You should aim to keep your services stateless; in a Kubernetes environment there is no telling when one pod might be replaced by another due to worker node maintenance.
If you have a long-running task that cannot be completed within the configured grace period for pods to shut down during a worker node drain/evacuation, you need to implement some kind of persistent work queue, as you are thinking about in option 1. I suggest you look into the saga pattern.
Another pattern we usually employ is to let the worker service write the current state of the job into the database and let the client poll the status every few seconds. This does, however, require some way of handling half-finished jobs that might be abandoned by pods that are forced to shut down.
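A minimal C# sketch of that write-state-and-poll pattern, with illustrative Job and IJobStore types standing in for the Postgres/Redis-backed storage from the question:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    public record Job(string Id, string CustomerId, string Status, int ProgressPercent);

    public interface IJobStore
    {
        Task<Job> GetAsync(string jobId);  // backed by Postgres/Redis in the question's setup
        Task UpsertAsync(Job job);
    }

    public class Worker
    {
        // The worker persists progress as it goes; which replica runs the job is
        // irrelevant to the API layer, which only reads the stored record.
        public async Task RunAsync(Job job, IJobStore store, CancellationToken token)
        {
            for (int pct = 0; pct <= 100; pct += 10)
            {
                token.ThrowIfCancellationRequested();
                string status = pct < 100 ? "Running" : "Done";
                await store.UpsertAsync(job with { Status = status, ProgressPercent = pct });
                await Task.Delay(500, token); // stand-in for a chunk of real work
            }
        }
    }

Because any replica can upsert the record and any API instance can read it, the API layer never needs to reach a specific pod.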

Guidance on how to make micro-services communicate effectively

We are embarking on a new project where we will have multiple micro-services communicating with each other to provide information in a cloud-native system. Our application will be decomposed into multiple services such as Text Cleaner, Entities Extractor, Entities Resolver, and Output Converter. As you can see in the diagram, we have some forking, where the input to one service is required by another service, and so forth.
Only one service is going to be exposed outside; the others would be internal. And we have to provide a synchronous response to clients.
I wanted to check if someone can guide me to the best patterns here:
1- Should we have one wrapper class which has the model classes for all projects, since all of the details are needed in the final output converter? Or how should the data flow so the data is assembled in the last micro-service? We want to keep the systems loosely coupled and are thinking about how to orchestrate this flow without having a middle layer which composes all this data.
2- How to orchestrate this flow? Service mesh / API gateway?
This looks like a workflow-based solution. When so many steps are involved, the only response you can give to the consumer is that the request was accepted, and the process then starts in the background. You cannot let the consumer wait for very long, because they will get a connection timeout.
If all these services are deployed on different servers (which should be the case, per the micro-services definition, for scalability), you can communicate via HTTP or using some messaging solution like JMS, or, if you are deployed on a cloud, the providers offer workflow-based services.
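As an illustration, here is a hedged sketch of a thin orchestrator that chains the four services over HTTP; in the accepted-then-background model it would run after the exposed service has already returned an "accepted" response (the service URLs and payload shape are assumptions):

    using System.Net.Http;
    using System.Net.Http.Json;
    using System.Threading.Tasks;

    public class PipelineOrchestrator
    {
        private readonly HttpClient _http = new HttpClient();

        // Each stage is a separate internal service; only the entry service is exposed.
        public async Task<string> RunAsync(string rawText)
        {
            var cleaned  = await PostAsync("http://text-cleaner/clean", rawText);
            var entities = await PostAsync("http://entities-extractor/extract", cleaned);
            var resolved = await PostAsync("http://entities-resolver/resolve", entities);
            return await PostAsync("http://output-converter/convert", resolved);
        }

        private async Task<string> PostAsync(string url, string payload)
        {
            var response = await _http.PostAsJsonAsync(url, payload);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }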

Stateful service Service Fabric app - remoting, and custom state-saving provider

I'm writing a first Azure Service Fabric app applying partitioning to stateful services. I have a few questions:
Can I use remoting instead of HTTP to communicate from my web API to my partitions? The Azure example uses HttpCommunicationListener and I've not been able to see how to use remoting. I would expect remoting to be faster.
Can I persist my state for a given partition using a custom state persistence provider? Will that still be supported by the replication features of service fabric?
Can my stateful service partition save several hundred megabytes of state?
Examples/guidance pointers for above would be greatly appreciated.
Thanks
You can use SF remoting within the cluster to communicate between services and actors. HTTP access is usually used to communicate with services from outside the cluster (but you can still use it from within).
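A minimal remoting sketch (the service name, interface, and partition key are assumptions; the service side must also return a remoting listener, for example via the CreateServiceRemotingReplicaListeners() extension, for this to work):

    using System;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Services.Client;
    using Microsoft.ServiceFabric.Services.Remoting;
    using Microsoft.ServiceFabric.Services.Remoting.Client;

    // Contract shared between the web API and the stateful service.
    public interface ICounterService : IService
    {
        Task<long> GetCountAsync();
    }

    public static class CounterClient
    {
        public static async Task<long> ReadCounterAsync(long partitionKey)
        {
            // ServiceProxy resolves the partition through the naming service and
            // talks to it over Fabric transport instead of HTTP.
            var proxy = ServiceProxy.Create<ICounterService>(
                new Uri("fabric:/MyApp/CounterService"), // assumed service name
                new ServicePartitionKey(partitionKey));  // ranged (Int64) partition key
            return await proxy.GetCountAsync();
        }
    }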
Yes, you can do that by implementing a custom IStateProviderReplica2 and likely the serializer. But be aware that this is difficult. (Why would you require this?)
Stateful service storage capacity is limited by disk and memory. (calculation example behind the link)
Reliable services are typically partitioned, so the amount you can store is only limited by the number of machines you have in the cluster, and the amount of memory available on those machines.
--- Extra info concerning partitioning ---
Yes, have a look at this video; the start of it is about how to come up with a partitioning strategy.
The most important downside of 'partition per user' is that the number of partitions cannot be changed without recreating the service. It also doesn't scale, and it leaves the distribution of data unbalanced.

Behaviour when reducing instances of a Bluemix application

I have an orchestrator service which keeps track of the instances that are running and what request they are currently dealing with. If a new instance is required, I make a REST call to increase the instances and wait for the new instance to connect to the orchestrator. It's one request per instance.
The orchestrator tracks whether an instance is doing anything and knows which instances can be stopped; however, there is nothing in the API that allows me to reduce the number of instances by stopping a particular instance, which is what I am trying to achieve.
Is there anything I can do to manipulate the platform into deterministically stopping the instances that I want to stop? Perhaps by having long-running HTTP requests to the instances I require, killing the request when it's no longer required, and then making the API call to reduce the number of instances?
Part of the issue here is that I don't know the specifics of the current behavior...
Assuming you're talking about Cloud Foundry/Instant Runtime applications, all of the instances of an application run behind a load balancer which uses round-robin to distribute requests across the instances (unless you have a session affinity cookie set up). Differentiating between instances for incoming requests or manual scaling is not recommended and is an anti-pattern. You cannot control which instance the scale-down task will choose.
If you really want that level of control over each instance, maybe you should deploy them as separate applications: MyApp1, MyApp2, MyApp3, etc. All of your applications can have the same route (myapp.mybluemix.net). Each of the applications can now distinguish itself by its name (VCAP_APPLICATION), allowing you to terminate them individually.
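A small C# sketch of how each separately deployed app can identify itself from the VCAP_APPLICATION environment variable that Cloud Foundry injects:

    using System;
    using System.Text.Json;

    // VCAP_APPLICATION is a JSON blob; application_name carries the per-app name (MyApp1, ...).
    string vcap = Environment.GetEnvironmentVariable("VCAP_APPLICATION") ?? "{}";
    using JsonDocument doc = JsonDocument.Parse(vcap);
    string appName = doc.RootElement.TryGetProperty("application_name", out JsonElement name)
        ? name.GetString() ?? "unknown"
        : "unknown";
    Console.WriteLine($"Running as {appName}; the orchestrator can target this app by name.");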

Sharing Reliable Collections across partitions

Is it possible to share data across Service Fabric partitions using Reliable Collections?
What would be the best approach to running an arbitrary number of instances of a CPU/network-bound service that needs to share a small amount of data to be used for a custom partitioning algorithm?
Reliable Collections themselves don't share state across partitions, no. But there are a couple ways you can share data depending on the nature of that data:
If the data you need to share is "dynamic" meaning it can change at runtime (e.g., due to user input), then you'd need to encapsulate that data in a separate service of its own, and provide an API for other services to access it. This would be accessible by any other service or application.
If the data you need to share is "static" meaning it doesn't change at runtime, then you can include it in the service as a data package or config package. These packages can be updated individually and separately from the service code without stopping or restarting the service. The same data/config package is available to all partitions of a service, but it is not directly accessible to other services or applications.
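For the "static" case, here is a sketch of reading such a package from inside any partition; the package name "Config" plus the section and parameter names are placeholders for whatever you define in your own Settings.xml:

    using System.Fabric;

    // Every partition of the service sees the same "Config" package contents.
    var context = FabricRuntime.GetActivationContext();
    var configPackage = context.GetConfigurationPackageObject("Config");
    string partitionMap = configPackage.Settings
        .Sections["PartitioningSettings"]         // assumed section in Settings.xml
        .Parameters["CustomPartitionMap"].Value;  // assumed parameter holding the shared data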