Distributed Lock - Using fencing tokens to prevent concurrent writes to a network file - distributed-computing

I am reading the Designing Data-Intensive Applications book. In chapter 8, it discusses the use of fencing tokens for preventing concurrent writes to a network file.
The mechanism is that a lock service can give out fencing tokens, and the storage node checks them and rejects lower tokens coming from a node whose lock lease has expired. In the particular example, it talks about the case where a node experiences a long GC pause and then tries to write to storage using a stale fencing token.
I am curious about a scenario where the node sends a fencing token which is accepted by the storage node and writes some data into the storage, then it experiences a long pause which causes the lease to expire. In such a case, would this already leave the network file in a corrupted state? If so, how can this be prevented?
I guess a similar question in nature is: what happens when a distributed lock lease expires while a resource is being modified? Does the client automatically extend the lease?
Thanks!

The receiving system has to be aware of fencing tokens; a network file share, FTP, or any other "general access" resource won't be able to deal with them.
From your question: "the node sends a fencing token which is accepted by the storage node and writes some data into the storage, then it experiences a long pause which causes the lease to expire" - who is experiencing the long pause?
If it's the storage node, that should be OK, as the storage node should be designed around this problem.
If it's the sender node, then the fencing token is the signal to the storage node that requests carrying older fencing tokens have to be ignored.
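To make that check concrete, here is a minimal sketch (plain Python; the class and method names are made up for illustration, not any particular lock service's or storage system's API): the storage node remembers the highest fencing token it has accepted per resource and rejects any write carrying a lower one, which is exactly what happens to a client that comes back from a long GC pause after its lease has expired and been handed to someone else.

```python
# Minimal sketch of the storage-side fencing check (illustrative only).
class FencedStorage:
    def __init__(self):
        self.highest_token = {}   # resource -> highest fencing token accepted so far
        self.data = {}            # resource -> last value written

    def write(self, resource, token, value):
        current = self.highest_token.get(resource, -1)
        if token < current:
            # A client that wakes up from a long pause with a stale token is
            # rejected instead of being allowed to overwrite newer data.
            raise PermissionError(f"stale fencing token {token} < {current}")
        self.highest_token[resource] = token
        self.data[resource] = value
```

For example, a client holding token 33 that pauses while another client writes with token 34 will get a PermissionError when it finally retries its write.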
Since many systems don't support fencing tokens, some literature recommends using the "Redlock" algorithm. But Martin Kleppmann (the author of Designing Data-Intensive Applications) argues that Redlock is not a correct algorithm.

Related

Druid segments not available from some point in time

The ingestion task was successful, but the segments have been unavailable since some point in time.
I can't restart the master node, because it is not a personal server.
Why are the segments unavailable, and how can I fix it?
For some reason it has the segments in the metadata database and tries to download them, but it cannot. Either your historical node's disk is full, or there are connection issues preventing it from downloading the segments from your deep storage (if it is S3, for example; if your deep storage is a local disk or a mounted network drive, check why it cannot access them).
Segments are available only if the historical node is able to download them; historical nodes are instructed by the coordinator to download segments.
It may be in the process of downloading them and no specific error has occurred, but the performance is poor and it takes time to download all of them and make them available.
Maybe it cannot (check the historical node's log).
Maybe the coordinator has some issues (check the coordinator's log).
In any case, if a segment is unavailable, the log entries for those possible errors can be found in either the historical node's log or the coordinator's log.

Real-life scenarios of when anyone would choose availability over consistency (Who would be interested in stale data?)

I was trying to wrap my brain around the CAP theorem. I understand that network partitions can occur (eventually leading to the nodes in the cluster being unable to sync up with the WRITE operations happening on the other nodes).
In this case, the cluster could still be up, and the load balancer in front of the cluster could route a request to any of the nodes; after a WRITE operation on one of the nodes, the other nodes that can't sync that data still have STALE data, and any subsequent READs to these nodes will serve STALE data.
[So we are losing CONSISTENCY as we choose AVAILABILITY (i.e., we have chosen to let the cluster give STALE responses back).]
Or we could SHUT DOWN the cluster whenever a network partition occurs (thereby losing AVAILABILITY, as we don't want to hamper consistency among the nodes).
I have 2 things I would like to know the answer to:
In reality, when would anyone choose to be AVAILABLE and still trade off CONSISTENCY? Who on this earth (practically) would be interested in STALE data?
Please help me understand by listing more than one scenario.
In case we would like to choose CONSISTENCY over AVAILABILITY, the cluster is down. Who on earth (in real-world scenarios) would practically accept designing their system to be DOWN in order to preserve CONSISTENCY?
Please list some scenarios.
Won't the majority of us look for high availability no matter what? What are our options? Please enlighten me.
If I send you a message on FB and you send one to me, I'd rather see the messages in an incorrect order (a message sent at 2pm showing up before one sent at 1pm) than not see them at all (an example of AVAILABILITY of messages being preferred over read-after-write CONSISTENCY of messages). Another example: if I gather website metrics, I'd rather skip or drop some signal than force my users to wait for a page load while my consistent transaction is stuck.
Keep in mind that inconsistency doesn't just mean STALE data; data can be inconsistent in different ways (https://aphyr.com/posts/313-strong-consistency-models).
Financial transactions are a classic example of data that requires consistency over availability. As a bank, I'd rather decline a user's request for a money transfer than accept it and lose the customer's money due to the DB being down.
I'd like to point out that the CAP theorem is a high-level concept. There are a lot of ways you can treat the terms consistency, availability, or even partitioning, and different businesses have different requirements. Software engineering as a whole, and distributed systems engineering in particular, is about making trade-offs.
An example where you may choose Availability over Consistency is collaborative editing (e.g. Google Docs). It may be perfectly acceptable (and in fact desirable) to allow users to make local modifications to the documents and deal with conflict resolution once the network is restored.
A bank ATM is an example where you'd choose Consistency over Availability. Once the ATM is disconnected from the network, you would not want to allow withdrawals (thus, no Availability). Or you could pick partial Availability and allow deposits or read-only access to your bank statements.
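As a toy illustration of the trade-off (deliberately simplified Python, not a real replication protocol; the Replica class and quorum rule below are assumptions made just for the example), an availability-first read answers from the local replica even when it may be stale, while a consistency-first read refuses to answer unless it can reach a majority of replicas:

```python
class Replica:
    def __init__(self, value=None, reachable=True):
        self.value = value          # last value this replica has seen
        self.reachable = reachable  # False while the replica is cut off by a partition

def read_available(local, peers):
    # AP choice: always answer, possibly with stale data.
    return local.value

def read_consistent(local, peers):
    # CP choice: answer only if a majority of all replicas is reachable;
    # during a partition this raises instead of returning possibly stale data.
    total = len(peers) + 1
    reachable = 1 + sum(p.reachable for p in peers)
    if reachable < total // 2 + 1:
        raise RuntimeError("partition: quorum unavailable, refusing to serve")
    return local.value
```

The messaging and metrics examples above pick the first behaviour; the bank transfer and ATM withdrawal examples pick the second.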

Does this make sense for Orleans or SF and if so guidance please

We’re working to take our software to Azure cloud and looking at Orleans and Service Fabric (SF) as potential frameworks. We need to:
Populate our analysis engines with lots of data (e.g., 100MB to 2GB) per engine instance.
Maintain that state, and if an engine instance goes idle for say 20 minutes or more, we’d like to unload it (i.e., and not pay for the engine instance resource).
Each engine instance will support one to several end users with a specific data set.
Each engine instance can be highly interactive, generating lots of plot data in near real time. We're maintaining state because we don't want to pay the price of populating an engine instance for each engine interaction.
An engine instance action can take a few seconds, a few minutes, to even tens of minutes. We’ll want some feedback.
Users may access an engine instance every few seconds (e.g., to steer the engine towards a result based on feedback) and will want live plot data.
Each user will want to talk to a specific engine instance.
As a user expresses interest in running a simulation (i.e., standing up an engine instance), ideally we want him to choose small/medium/large computing resource to run his engine instance (i.e., based on the problem he’s trying to solve he may want more or less computing/memory power).
We’re considering Orleans and SF but we’re having difficulty specifying architecture based on above requirements. We’ve considered:
Trying to think about an SF partition, or an Orleans silo as an ‘engine instance’ described above.
Leveraging both Orleans and SF notion of fault tolerance through replication.
Leveraging local (i.e., to partition or silo) storage to store results and maintain state (i.e., for long periods or until idle for 20 minutes).
We’ve not understood how to:
Limit a silo or a partition to a single engine instance so that we can control resourcing of the engine instance.
Keep a user's engine instance data separate from another user's engine instance data.
Direct a request from a user (e.g., through a web API) to a particular engine instance.
Does this make sense for Orleans, does it make more sense for SF? Any pointers on how to implement the above would be helpful.
When you say SF, I assume you mean SF Actors, right?
You can use them the way you want, but in both cases they do not look like the right solution for your problem, because:
Actors are single-threaded; if you plan to share the same instance with multiple clients, each one has to wait for the previous one to finish before it starts processing anything. If you need to monitor the status of a running actor, you have to make the actor publish updates to external subscribers (see the sketch after this list).
Actor state is isolated, so you can't access the state of other actors; the way to do it is to provide a method that returns it, but if the actor is running a command you have to wait for it to complete, unless you create a separate state service to hold the processed data.
You can't limit the resources required for an actor. In Service Fabric you specify the resources needed for a service, but you can't do that for actors, and you can't limit the resources they use; when they hit the limit, Service Fabric will try to rebalance the resources for you, but nothing prevents the process from consuming more memory than requested.
Both actor frameworks communicate using the ask approach, so they will "block" the caller waiting for an answer; it is asynchronous, but you still have to keep the caller 'waiting'. (It blocks and waits because there is no notion of fire-and-forget like in Akka, which uses the tell approach, delivering the message and forgetting about it.)
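To illustrate the first point (plain Python rather than the Orleans or Service Fabric API; EngineActor, its mailbox, and the callback scheme are invented for the example), a single-threaded actor drains one request at a time from its mailbox and pushes progress updates out to subscribers instead of letting callers poll it mid-call:

```python
import queue
import threading

class EngineActor:
    def __init__(self):
        self._mailbox = queue.Queue()
        self._subscribers = []             # e.g. web-socket sessions, other actors
        threading.Thread(target=self._run, daemon=True).start()

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def tell(self, request):
        self._mailbox.put(request)         # callers queue up behind each other

    def _run(self):
        while True:
            request = self._mailbox.get()  # one message at a time (single-threaded)
            for step in range(1, 4):       # stand-in for a long-running simulation
                for notify in self._subscribers:
                    notify(f"{request}: step {step}/3 done")
```

A caller would subscribe(print) and then tell("run simulation A"); a second tell issued while the first is running simply waits in the mailbox.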
Based on some of your requirements, I think containers would be a better approach, because:
You can limit the resource consumption for each container
The data is isolated inside the container and not visible to others
But with containers you have to manage replication and partitioning yourself, so in this case I would recommend the best of both worlds:
Create SF services to host the data sets shared between users
An SF service + actor to store only the results of users' simulations
Containers to run the simulations and send updates to the actors
This is just an example; it will all depend on your requirements, your architecture, and how the data sets are isolated from each other.

How a high-frequency trading system connects to an exchange

I'm trying to study high-frequency trading systems. What's the mechanism that HFT systems use to connect to the exchange, and what's the procedure? (Does it have to go through a broker or is it direct access? If it's direct access, what sort of connection information do I require?)
Thanks in advance for your answers.
Understand that there are two different "connections" in an HFT engine. The first is the connection to a market data source. The second is to a clearing resource. As mentioned in kpavlov's answer, a very expensive COLO (co-location) is needed to get as close to the data source/target as possible. Depending on their nominal latency these COLO resources cost thousands of dollars per month.
With both connections, your trading engine must be certified by the provider (ICE, CME, etc) to comply with their requirements. With CME the certification process is automated, with ICE it employs human review. In any case, the certification requires that your software demonstrate conformance to standards and freedom from undesirable network side effects.
You must also subscribe to your data source(s) and clearing service; neither is inexpensive, and pricing varies over a pretty wide range. During the subscription process you'll gain access to the service provider's technical data specification(s) -- a critical part of designing your trading engine. Using old data that you find on the Internet for design purposes is a recipe for problems later. Subscription also gets you access to the providers' test sites. It is on these test sites that you test and debug your engine.
After you think your engine is ready for deployment, you begin connecting to the data/clearing production servers. This connection will get you into a place of shadows -- port roulette. Not every port at the provider's network edge has the same latency. Here you'll learn that you can have the shortest latency yet seldom have orders filled first. Traditional load balancing does little to help this, and CME has begun deployment of FPGA-based systems to ensure correct temporal sequencing of inbound orders, but it's still early in its deployment process.
Once you're running, you then get to learn that mistakes can be very expensive. If you place an order prior to a market pre-open event, the order is automatically rejected. Do it too often and the clearing provider will charge you a very stiff penalty. Other things can also get you penalized or even kicked off the service if your systems are determined to be implementing strategies to block others from access, etc.
All the major exchanges' web sites have links to public data and educational resources to help you decide if HFT is "for you" and how to go about it.
It usually requires approval from the exchange to grant access from outside. They protect their servers with firewalls, so your server/network needs to be authorized for access.
A special certification procedure with a technician (by phone) is usually required before they authorize you.
Most liquidity providers use the FIX protocol or custom APIs. You may consider starting your connector implementation with QuickFIX, but it may become a bottleneck later, when your traffic grows (see the sketch after the list below).
The information you need to connect via FIX is:
Server IP
Server port
FIX protocol credentials:
SenderCompID
TargetCompID
Username
Password
Other fields
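As a rough sketch of how those pieces fit together, here is a minimal FIX initiator using the QuickFIX Python bindings (pip install quickfix). The host, port, CompIDs and credentials below are placeholders, the exact logon fields are venue-specific, and a real session would normally use the provider's data dictionary rather than UseDataDictionary=N:

```python
import quickfix as fix

# Placeholder session settings; your liquidity provider supplies the real values.
CONFIG = """
[DEFAULT]
ConnectionType=initiator
HeartBtInt=30
FileStorePath=./fixstore
UseDataDictionary=N

[SESSION]
BeginString=FIX.4.4
SenderCompID=MY_SENDER_COMP_ID
TargetCompID=PROVIDER_COMP_ID
SocketConnectHost=127.0.0.1
SocketConnectPort=5001
StartTime=00:00:00
EndTime=23:59:59
"""

class Connector(fix.Application):
    def onCreate(self, sessionID): pass
    def onLogon(self, sessionID): print("logged on:", sessionID)
    def onLogout(self, sessionID): print("logged out:", sessionID)

    def toAdmin(self, message, sessionID):
        # Many venues expect Username(553)/Password(554) on the Logon ("A") message.
        msg_type = fix.MsgType()
        message.getHeader().getField(msg_type)
        if msg_type.getValue() == "A":  # "A" = Logon
            message.setField(fix.Username("my-username"))   # placeholder
            message.setField(fix.Password("my-password"))   # placeholder

    def fromAdmin(self, message, sessionID): pass
    def toApp(self, message, sessionID): pass
    def fromApp(self, message, sessionID): pass

with open("session.cfg", "w") as f:
    f.write(CONFIG)

settings = fix.SessionSettings("session.cfg")
initiator = fix.SocketInitiator(Connector(),
                                fix.FileStoreFactory(settings),
                                settings,
                                fix.ScreenLogFactory(settings))
initiator.start()
```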

Interprocess messaging - MSMQ, Service Broker, ...?

I'm in the planning stages of a .NET service which continually processes incoming messages, which involves various transformations, database inserts and updates, etc. As a whole, the service is huge and complicated, but the individual tasks it performs are small, simple, and well-defined.
For this reason, and in order to allow for easy expansion in future, I want to split the service into several smaller services which basically perform part of the processing before passing it onto the next service in the chain.
In order to achieve this, I need some kind of intermediary messaging system that will pass messages from one service to another. I want this to happen in such a way that if a link in the chain crashes or is taken offline briefly, the messages will begin to queue up and get processed once the destination comes back online.
I've always used message queuing for this type of thing, but have recently been made aware of SQL Service Broker which appears to do something similar. Is SQLSB a viable alternative for this scenario and, if so, would I see any performance benefits by using that instead of standard Message Queuing?
Thanks
It sounds to me like you may be after a service bus architecture. This would provide you with the coordination and fault tolerance you are looking for. I'm most familiar and partial to NServiceBus, but there are others including Mass Transit and Rhino Service Bus.
If most of these steps initiate from a database state and end up in a database update, then merging your message storage with your data storage makes a lot of sense:
a single product to backup/restore
consistent state backups
a single high-availability/disaster recoverability solution (DB mirroring, clustering, log shipping etc)
database scale storage (IO capabilities, size and capacity limitations etc as per the database product characteristics, not the limits of message store products).
a single product to tune, troubleshoot, administer
In addition there are also serious performance considerations: having your message store be the same as the data store means you are not required to do a two-phase commit on every message interaction. Using a separate message store requires you to enroll the message store and the data store in a distributed transaction (even if it is on the same machine), which requires two-phase commit and is much slower than the single-phase commit of database-only transactions.
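As a language-agnostic illustration of that point (SQLite via Python here rather than Service Broker, with invented table names), keeping the queue in the same database lets the dequeue and the resulting business update commit in a single local transaction, with no distributed transaction coordinator involved:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE queue  (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status  TEXT);
    INSERT INTO queue (payload) VALUES ('ship order 1');
    INSERT INTO orders (status) VALUES ('new');
""")

with db:  # one local transaction: dequeue + data update, no two-phase commit
    row = db.execute("SELECT id, payload FROM queue ORDER BY id LIMIT 1").fetchone()
    if row:
        db.execute("DELETE FROM queue WHERE id = ?", (row[0],))
        db.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")
```

With an external message store, the DELETE and the UPDATE would live in two different resource managers and need a distributed transaction to stay atomic.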
In addition using a message store in the database as opposed to an external one has advantages like queryability (run SELECT over the message queues).
Now if we translate the abstract terms, with 'message store in the database' being Service Broker and 'non-database message store' being MSMQ, you can see my point about why SSB will run circles around MSMQ any time.
My recent experiences with both approaches (starting with SQL Server Service Broker) led me to the situation in which I cry to get my messages out of SQL Server. The problem is quasi-political, but you might want to consider it: SQL Server in my organisation is managed by a specialized DBA, while application servers (i.e. messaging like NServiceBus) are managed by developers and the network team. Any change to the database servers requires painful performance analysis from the DBA and is immersed in the fear that our queuing engine, living in the same space, might drag down the standard SQL responsibilities.
SSSB is pretty difficult to manage (not unlike messaging middleware), but the difference is that I am more allowed to screw something up in the messaging world (the worst that may happen is some pile of messages building up somewhere and logs filling up), whereas I can't afford any mistakes in the SQL world, where customers' transactional data lives and is vital for the business (including data from legacy systems). I really don't want to get those 'unexpected database growth' or 'wait time alert' or 'why is my tempdb growing without end' emails anymore.
I've learned that application servers are cheap. Just add message handlers, add machines... easy. Virtually no license costs. With SQL Server it is exactly the opposite. It now appears to me that using Service Broker for messaging is like using an expensive car to plow a potato field. It is much better suited for other things.