I have a Web API stateless service that receives a file from a client and transfers it to an actor service (for deferred ETL operations). File size is limited to 20 MB.
Is it a good idea to transfer the file directly (in memory, as a byte array) from one service to another? Or is there any feature, like a file-based state, to replicate the file within the cluster for further processing?
P.S. It is impossible (due to legal reasons) to upload it anywhere before processing.
P.P.S. SF cluster is on-premises installation.
It is not a good idea to do that for a few reasons:
1 - If you store your files in Reliable Collections, it would make your collections too big and slow down replication, as every update to your collections is replicated to other nodes. It would also make moving services around the cluster expensive (in time).
2 - If you don't store it in any collection and leave it in memory, Service Fabric could move your service around the cluster and you risk losing the data.
3 - When the file is uploaded, you must return a confirmation to the user rather than leaving them waiting until the processing is complete; locking server resources that long is a bad idea.
4 - Saving it to disk won't replicate the file, and if your service moves to other nodes, you lose access to the file.
There are more reasons, but if you can't save it somewhere else (like a file share), you will have to accept these risks.
If you still prefer to go this route, I would suggest:
Send the content to the actor that will process it; the actor will save it to its actor state (see the sketch below).
Register a timer (or a reminder, depending on your requirements) in the actor to trigger the processing of this file.
After processing, deactivate the timer (or reminder) and save the output somewhere.
Deactivate the actor and delete the state.
Using the actor state to store each file makes it more flexible: you might register the file in your actor on one node, and if the actor is moved, the actor state will still be available when it gets activated on another node.
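A minimal sketch of that flow, assuming a hypothetical EtlActor/IEtlActor pair (the state key, reminder name and timings are illustrative; a reminder is used instead of a timer so the scheduled work survives deactivation):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

public interface IEtlActor : IActor
{
    Task AcceptFileAsync(byte[] content, CancellationToken cancellationToken);
}

[StatePersistence(StatePersistence.Persisted)]
internal class EtlActor : Actor, IEtlActor, IRemindable
{
    private const string FileStateKey = "file";
    private const string ProcessReminder = "process-file";

    public EtlActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    public async Task AcceptFileAsync(byte[] content, CancellationToken cancellationToken)
    {
        // Persist the file bytes in the replicated actor state.
        await StateManager.SetStateAsync(FileStateKey, content, cancellationToken);

        // Schedule the deferred ETL work; a period of -1 ms means "fire once".
        await RegisterReminderAsync(
            ProcessReminder,
            state: null,
            dueTime: TimeSpan.FromSeconds(5),
            period: TimeSpan.FromMilliseconds(-1));
    }

    public async Task ReceiveReminderAsync(string reminderName, byte[] state, TimeSpan dueTime, TimeSpan period)
    {
        if (reminderName != ProcessReminder) return;

        byte[] file = await StateManager.GetStateAsync<byte[]>(FileStateKey);

        // ... run the ETL processing on "file" and save the output somewhere durable ...

        // Clean up so the actor state does not keep growing.
        await UnregisterReminderAsync(GetReminder(ProcessReminder));
        await StateManager.RemoveStateAsync(FileStateKey);
    }
}

The Web API service would then call something like ActorProxy.Create<IEtlActor>(new ActorId(fileId), new Uri("fabric:/MyApp/EtlActorService")).AcceptFileAsync(bytes, ct) and return the confirmation to the client right away.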
Keep in mind that your cluster has nodes and they may fail, so you should not rely on their memory or disks to save state, unless it is replicated elsewhere with different reliability guarantees, like Azure storage.
Quoting the ZooKeeper docs:
ZooKeeper is a distributed, open-source coordination service for distributed applications. It exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming.
Guarantees
ZooKeeper is very fast and very simple. Since its goal, though, is to be a basis for the construction of more complicated services, such as synchronization, it provides a set of guarantees. These are:
Sequential Consistency - Updates from a client will be applied in the order that they were sent.
Atomicity - Updates either succeed or fail. No partial results.
Single System Image - A client will see the same view of the service regardless of the server that it connects to.
Reliability - Once an update has been applied, it will persist from that time forward until a client overwrites the update.
Timeliness - The clients view of the system is guaranteed to be up-to-date within a certain time bound.
But I don't see any new problem that ZooKeeper solves apart from being highly fault tolerant compared to a central database. All the guarantees that ZooKeeper provides can be guaranteed by a central database too.
Atomicity -> As it's a single node, all updates are atomic.
Sequential Consistency -> After an update, clients can wait for the ack before sending the next update, to maintain the sequence.
Single System Image, Reliability, Timeliness -> Guaranteed, as it's a single node.
So, avoiding a single point of failure is the only major advantage of using ZooKeeper. Please correct me if I'm wrong.
ZooKeeper (and other consensus-based systems) offer sequential consistency, strong consistency and high availability.
"Apart from being highly fault tolerant" - that's actually huge: the fault tolerance.
If you don't care about availability, you can totally use any other linearizable storage - even a directory with files will work.
Consensus-based systems, and systems built on them (e.g. ZooKeeper + your own code), are used to implement state machine replication. All transitions are stored in a distributed log; to make it durable, there are many copies. Consensus is about agreeing on the order of events in the log.
With the log available, the actual business code can consume events and change its state machine - typical state machine transitions. Since each copy of the log has the same sequence of events, all state machines will arrive at the same state.
The key thing is timing - all logs will get the same events in the same order, but there is no guarantee when that happens: a node could be disconnected from the network, so its log will be stale, and by extension its state machine as well.
To see the true latest value, as you would expect from a single source of truth, you have to use a linearizable read. One way of doing this is to append the read operation to the log itself and wait for it to be committed. The read does nothing to the state machine, but the fact that the reader placed something in the log and got it committed signals that the log has been read up to that point - there is no stale data. ("Not stale" here means that all writes that happened before the read are reflected; new writes could still arrive while the read is in progress.)
All of this complexity comes from the availability requirements - a cluster of three nodes can tolerate one node going down without affecting operations.
So, yes, ignoring availability, you could use any linearizable storage to do the same. You could do this by keeping the log of events in a table and having every client track a pointer (or id) to the last applied operation, so every client can go and advance its own state machine.
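A minimal, single-process sketch of that last idea (an in-memory list stands in for the table or consensus log; the names and the no-op read barrier are illustrative, not any particular library's API):

using System;
using System.Collections.Generic;

public sealed class SharedLog
{
    private readonly List<string> _entries = new List<string>();
    private readonly object _gate = new object();

    // Append an event and return its index (its "commit position").
    public int Append(string entry)
    {
        lock (_gate) { _entries.Add(entry); return _entries.Count - 1; }
    }

    // Read every committed entry starting from the given index.
    public IReadOnlyList<string> ReadFrom(int index)
    {
        lock (_gate) { return _entries.GetRange(index, _entries.Count - index); }
    }
}

public sealed class Replica
{
    private readonly SharedLog _log;
    private int _applied;                      // pointer to the last applied entry
    public int Counter { get; private set; }   // this replica's state machine: a counter

    public Replica(SharedLog log) => _log = log;

    // Apply every committed entry we have not seen yet, in log order.
    public void CatchUp()
    {
        foreach (string entry in _log.ReadFrom(_applied))
        {
            if (entry == "increment") Counter++;   // state machine transition
            _applied++;
        }
    }

    // Linearizable read: append a no-op marker, then catch up past it.
    // Once the marker is applied, every write committed before the read
    // started is reflected in this replica's state.
    public int LinearizableRead()
    {
        int marker = _log.Append("no-op");
        while (_applied <= marker) CatchUp();
        return Counter;
    }
}

Each replica only ever moves forward through the shared log, so every replica that catches up ends up with the same Counter value; the no-op append is the "read barrier" described above.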
I am currently reading up on some distributed systems design patterns. One of the design patterns, when you have to deal with a lot of data (billions of entries or multiple petabytes), is to spread it out across multiple servers or storage units.
One of the solutions for this is to use a consistent hash. This should result in an even spread across all servers in the hash.
The concept is rather simple: we can just add new servers and only the servers in the affected range are impacted, and if you lose servers, the remaining servers in the consistent hash take over. This works when all servers in the hash have the same data (in memory, on disk or in a database).
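For illustration, this is roughly how I picture the ring (a minimal sketch; MD5 hashing and the number of virtual nodes are arbitrary choices, and there is no replication or data movement in it):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public class ConsistentHashRing
{
    private readonly SortedDictionary<uint, string> _ring = new SortedDictionary<uint, string>();
    private readonly int _virtualNodes;

    public ConsistentHashRing(int virtualNodes = 100) => _virtualNodes = virtualNodes;

    public void AddServer(string server)
    {
        for (int i = 0; i < _virtualNodes; i++)
            _ring[Hash($"{server}#{i}")] = server;
    }

    public void RemoveServer(string server)
    {
        for (int i = 0; i < _virtualNodes; i++)
            _ring.Remove(Hash($"{server}#{i}"));
    }

    // The owner of a key is the first virtual node clockwise from the key's hash.
    public string GetServer(string key)
    {
        uint h = Hash(key);
        foreach (var kv in _ring)
            if (kv.Key >= h) return kv.Value;
        return _ring.First().Value; // wrap around the ring
    }

    private static uint Hash(string value)
    {
        using var md5 = MD5.Create();
        byte[] digest = md5.ComputeHash(Encoding.UTF8.GetBytes(value));
        return BitConverter.ToUInt32(digest, 0);
    }
}

Adding a server only claims the ranges immediately counter-clockwise of its virtual nodes, so only keys hashing into those ranges change owner; everything else stays where it is.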
My question is how we handle adding and removing servers from a consistent hash when there is so much data that it can't be stored on a single host. How do the servers figure out what data to store and what not to?
Example:
Let's say that we have 2 machines running, "0" and "1". They are starting to reach 60% of their maximum capacity, so we decide to add an additional machine, "2". Now a large part of the data on machine 0 has to be migrated to machine 2.
How would we automate this so it happens without downtime and reliably?
My own suggested approach would be that the service hosting the consistent hash and the machines would have to be aware of how to transfer data between each other. When a new machine is added, the consistent hash service would calculate the affected hash ranges, then inform the affected machines of those ranges and tell them that they need to transfer the affected data to machine 2. Once the affected machines are done transferring their data, they would ACK back to the consistent hash service. Once all affected machines are done transferring data, the consistent hash service would start sending data to machine 2 and inform the affected machines that they can remove their transferred data now. If we have petabytes on each server, this process can take a long time. We therefore need to keep track of which entries were changed during the transfer so we can sync them afterwards, or we can submit the writes/updates to both machine 0 and machine 2 during the transfer.
My approach would work, but I feel it is a little risky with all the back and forth, so I would like to hear if there is a better way.
How would we automate this so it happens without downtime and reliably?
It depends on the technology used to store your data, but for example in Cassandra there is no "central" entity that governs the process; it is done like almost everything else, by having nodes gossip with each other. There is no downtime when a new node joins the cluster (performance might be slightly impacted, though).
The process is as follows:
The new node joining the cluster is defined as an empty node without system tables or data.
When a new node joins the cluster using the auto bootstrap feature, it will perform the following operations:
- Contact the seed nodes to learn about gossip state.
- Transition to Up and Joining state (to indicate it is joining the cluster; represented by UJ in the nodetool status).
- Contact the seed nodes to ensure schema agreement.
- Calculate the tokens that it will become responsible for.
- Stream replica data associated with the tokens it is responsible for from the former owners.
- Transition to Up and Normal state once streaming is complete (to indicate it is now part of the cluster; represented by UN in the nodetool status).
Taken from https://thelastpickle.com/blog/2017/05/23/auto-bootstrapping-part1.html
So while the joining node is in the Joining state, it is receiving data from other nodes but is not ready for reads until the process is complete (Up and Normal status).
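As a simplified illustration of the "calculate the tokens" step (assuming one token per node and no replication, which is much simpler than what Cassandra actually does with vnodes and replication factors), the joining node takes over only the slice of the ring between its predecessor's token and its own, and streams that slice from the former owner:

using System;
using System.Collections.Generic;
using System.Linq;

public static class TokenRanges
{
    // Returns the (exclusiveStart, inclusiveEnd) token range a joining node
    // takes over; the former owner of that range is the next node clockwise.
    public static (long Start, long End) RangeTakenOver(IReadOnlyCollection<long> existingTokens, long newToken)
    {
        // Predecessor = largest existing token below the new one,
        // wrapping around the ring if there is none.
        long predecessor = existingTokens.Any(t => t < newToken)
            ? existingTokens.Where(t => t < newToken).Max()
            : existingTokens.Max();
        return (predecessor, newToken);
    }
}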
DataStax also has some material on this https://academy.datastax.com/units/2017-ring-dse-foundations-apache-cassandra?path=developer&resource=ds201-datastax-enterprise-6-foundations-of-apache-cassandra
We appear to have encountered an issue within a Service Fabric cluster whereby the state of an actor service has grown to the point where the temporary storage (D:) drive has filled up. As I understand it, actor state and reliable collection state are persisted to disk on this drive. For one service we had amassed 190 GB of space taken up by the ActorStateStore file.
We were working on the assumption that the space was taken up by a lot of stale actors that we no longer needed in our system, so we added a call to the service to purge the unwanted actors using the mechanism detailed here (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-actors-lifecycle).
ActorId actorToDelete = new ActorId(id);
IActorService myActorServiceProxy = ActorServiceProxy.Create(
    new Uri("fabric:/MyApp/MyService"), actorToDelete);
await myActorServiceProxy.DeleteActorAsync(actorToDelete, cancellationToken);
We cycled through the full list of actors in state and called the delete on anything we didn't want to keep, which would have been the vast majority of them, expecting this to reduce our space consumption on the temporary storage drive. However, the space did not seem to free up. Does anyone know what the lifecycle of this process is? Is there a lead time before we should expect the drive space to appear free?
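For reference, the sweep was essentially a paging loop over GetActorsAsync, roughly like the sketch below (the shouldKeep predicate stands in for our own filtering logic, and a real sweep repeats this for every partition):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Client;
using Microsoft.ServiceFabric.Actors.Query;

public static class ActorHousekeeping
{
    public static async Task PurgePartitionAsync(
        Uri serviceUri, long partitionKey, Func<ActorId, bool> shouldKeep, CancellationToken ct)
    {
        IActorService proxy = ActorServiceProxy.Create(serviceUri, partitionKey);

        ContinuationToken continuationToken = null;
        do
        {
            // Page through the actors known to this partition.
            PagedResult<ActorInformation> page = await proxy.GetActorsAsync(continuationToken, ct);
            foreach (ActorInformation info in page.Items)
            {
                if (!shouldKeep(info.ActorId))
                {
                    // Removes the actor and all of its persisted state.
                    await proxy.DeleteActorAsync(info.ActorId, ct);
                }
            }
            continuationToken = page.ContinuationToken;
        }
        while (continuationToken != null);
    }
}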
Is there a mechanism that can be used to free up this drive space, or what is the best way to perform housekeeping to remove old actors that we are no longer interested in?
I've got some run-time data I'd like to exist on a designated actor on every node in my Akka cluster, which could be updated via internal event or API call to a single node. I could store this data in a shared database to make it permanent, but I'd rather just store it in memory for speed, since it doesn't need to be persisted. Akka Cluster Singleton, Distributed Pub Sub, and possibly other built-in modules use gossip protocols to keep distributed state in sync.
Is there a ready-built way to adopt data synchronization of my own actors across my cluster?
I've thought about just publishing changes to Distributed Pub Sub, but it seems like this wouldn't be resilient to dropped messages. If I stored it in a cluster singleton, it wouldn't survive that node going down. I don't need persistence if the entire cluster goes down, but I do want resilience if individual nodes do.
You should have a look at Akka Distributed Data, which should really be called "Akka Replicated Data", as it will replicate the data across all nodes.
It provides a simple key-value store, and any changes made on one node will be replicated to all others. As all data is kept on all nodes, it's best used for small data sets. Also, the values in your key-value pairs need to be CRDTs (conflict free replicated data types). The module comes with some pre-defined CRDTs that cover a lot of use cases.
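As a side note on what "CRDT" means in practice, here is a tiny grow-only counter, one of the simplest CRDTs (just an illustration of the idea, not Akka's actual API):

using System;
using System.Collections.Generic;
using System.Linq;

public sealed class GCounter
{
    // node id -> that node's local increment count
    private readonly Dictionary<string, long> _counts = new Dictionary<string, long>();

    public void Increment(string nodeId, long amount = 1)
    {
        _counts.TryGetValue(nodeId, out long current);
        _counts[nodeId] = current + amount;
    }

    public long Value => _counts.Values.Sum();

    // Merge is a per-node max, so it is commutative, associative and
    // idempotent; that is what makes the type conflict-free.
    public void Merge(GCounter other)
    {
        foreach (var kv in other._counts)
        {
            _counts.TryGetValue(kv.Key, out long current);
            _counts[kv.Key] = Math.Max(current, kv.Value);
        }
    }
}

Because the merge is a per-node max, replicas can exchange and merge state in any order, any number of times, and still converge, which is what lets the replication layer rely on gossip and retries instead of coordination.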
For example, I have an application that does lots of audit trail writing. Lots. It slows things down. If I create a separate service on my Oracle RAC just for audit CRUD, would that help speed things up in my application?
In other words, I point most of the application to the main service listening on my RAC via SCAN. I take the subset of my application that does the audit trail data manipulation and point it to a separate service that listens against the same database and schema as the main service.
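From the application side this would mostly be a connection-string change; something like this sketch with ODP.NET, where the SCAN host and the MAIN_SVC/AUDIT_SVC service names are made up for illustration:

using Oracle.ManagedDataAccess.Client;

public static class AuditConnections
{
    // Most of the application keeps using the main service via the SCAN address.
    public const string Main =
        "User Id=app;Password=secret;Data Source=myrac-scan.example.com:1521/MAIN_SVC";

    // Only the audit-trail code connects through its own service, which the DBA can
    // pin to preferred instances or map to a Resource Manager consumer group.
    public const string Audit =
        "User Id=app;Password=secret;Data Source=myrac-scan.example.com:1521/AUDIT_SVC";

    public static OracleConnection OpenAuditConnection()
    {
        var connection = new OracleConnection(Audit);
        connection.Open();
        return connection;
    }
}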
As with anything else, it depends. You'd need to be a lot more specific about your application, what services you'd define, your workloads, your goals, etc. Realistically, you'd need to test it in your environment to know for sure.
A separate service could allow you to segregate the workload of one application (the one writing the audit trail) from the workload of other applications by having different sets of nodes in the cluster running each service (under normal operation). That can help ensure that the higher-priority application (presumably not the one writing the audit trail) has a set amount of hardware to handle its workload even if the lower-priority workload is running at full throttle. Of course, since all the nodes are sharing the same disk, if the bottleneck is disk I/O, that segregation of workload may not accomplish much.
Separating the services onto different sets of nodes can also affect how frequently a particular service gets blocks from the local node's buffer cache rather than requesting them from another node and waiting for them to be shipped over the interconnect. It's quite possible that an application that is constantly writing to log tables ends up spending quite a bit of time waiting for a small number of hot blocks (such as the right-most block in the primary key index for the log table) to get shipped back and forth between different nodes. If all the audit records are being written on just one node (or on a smaller number of nodes), that hot block will always be available in the local buffer cache. On the other hand, if writing the audit trail involves querying the database to get information about a change, separating the workload may mean that blocks that were in the local cache (because they were just changed) now get shipped across the interconnect, and you could end up hurting performance.
Separating the services even if they're running on the same set of nodes may also be useful if you plan on managing them differently. For example, you can configure Oracle Resource Manager rules to give priority to sessions that use one service over another. That can be a more fine-grained way to allocate resources to different workloads than running the services on different nodes. But it can also add more overhead.