How the FIX Protocol handles a chain of requests

I have a question regarding FIX Protocol and I'm quite new to it.
A client sent an order and it got accepted by the broker. After that, the client sent a request (request #1) to modify the quantity to x. However, before request #1 is accepted, the client sent another modification request (request #2) to modify the quantity to y.
I searched the FIX Protocol documentation and found the following:
The order sender should chain client order ids on an 'optimistic' basis, i.e. set the OrigClOrdID <41> to the last non-rejected ClOrdID <11> sent
The order receiver should chain client order ids on a 'pessimistic' basis, i.e. set the OrigClOrdID <41> on execution reports that convey the receipt or successful application of a cancel/replace and Order Cancel Reject <9> messages to be the last 'accepted' ClOrdID <11> (See "Order State Change Matrices" for examples of this)
However, I still don't understand how the FIX Protocol handles such requests. Will the quantity end up modified to y? Or does it depend on which request is accepted last?

You need to understand that the FIX Protocol is a set of guidelines on how someone who implements the protocol should handle certain scenarios. In practice there are differences in how counterparties handle this.
In your example the quantity should be modified to y in the end. But the quantity will be modified to x first, since that message was received first.
Here are some chaining examples taken from the spec:
https://www.onixs.biz/fix-dictionary/4.4/app_d.html (search for sequencing or chaining)
Here are examples specific to your question where two consecutive replace requests are handled:
https://www.onixs.biz/fix-dictionary/4.4/app_dD.2.a.html
https://www.onixs.biz/fix-dictionary/4.4/app_dD.2.b.html
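The "optimistic" sender-side chaining quoted above can be sketched in a few lines. This is an illustration, not a real FIX engine API; the function and ID values are invented for the example. Each CancelReplaceRequest sets OrigClOrdID (41) to the last non-rejected ClOrdID (11) sent, even if that earlier request has not been accepted yet:

```python
# Hypothetical sketch of "optimistic" ClOrdID chaining on the sender side.
def build_replace_chain(new_order_id, replace_ids):
    """Return the (OrigClOrdID, ClOrdID) pairs the sender would emit."""
    messages = []
    last_clordid = new_order_id          # ClOrdID of the NewOrderSingle
    for clordid in replace_ids:
        # Each replace points back at the previous request, accepted or not.
        messages.append({"OrigClOrdID": last_clordid, "ClOrdID": clordid})
        last_clordid = clordid
    return messages

chain = build_replace_chain("ORD-1", ["REPL-1", "REPL-2"])
# Request #1 references the original order; request #2 references request #1.
```

So in your scenario, request #2 already names request #1 as its OrigClOrdID even though request #1 is still pending, which is why the receiver, applying both in arrival order, ends at quantity y.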

Related

Server returns status 200 but client doesn't receive it because the network connection is broken

I have a REST service and a client (an Android app) that sends POST requests to the REST service. On the client side there are documents (orders) that need to be synchronized with the web server. Synchronization means that the client sends a POST request to the REST service for each order. When the REST service receives a POST request it writes data to the database and sends a response with status 200 to the client. The client receives the 200 and marks that order as synchronized.
The problem is when the connection is broken after the server has sent the 200 response but before the client receives it. The client doesn't mark the order as synchronized. The next time, the client sends this order again and the server writes it to the database again, so we have the same order twice.
What is a good practice for dealing with this kind of problem?
The problem is when the connection is broken after the server has sent the 200 response but before the client receives it. The client doesn't mark the order as synchronized. The next time, the client sends this order again and the server writes it to the database again, so we have the same order twice.
Welcome to the world of unreliable messaging.
What is a good practice for dealing with this kind of problem?
You should review Nobody Needs Reliable Messaging, by Marc de Graauw (2010).
The cornerstone of reliable messaging is idempotent request handling. RFC 7231 describes idempotent semantics this way:
A request method is considered "idempotent" if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.
Simply fussing with the request method, however, doesn't get you anything. First, the other semantics in the message may not align with the idempotent request methods, and second, the server needs to know how to implement the effect as intended.
There are two basic patterns to idempotent request handling. The simpler of these is set, meaning "overwrite the current representation with the one I am providing".
// X == 6
server.setX(7)
// X == 7
server.setX(7) <- a second, identical request, but the _effect_ is the same.
// X == 7
The alternative is test and set (sometimes called compare and swap); in this pattern, the request has two parts - a predicate to determine if some condition holds, and the change to apply if the condition does hold.
// X == 6
server.testAndSetX(6,7)
// X == 7
server.testAndSetX(6,7) <- a no-op, because X is now 7, which doesn't match the expected 6
// X == 7
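The pseudocode above can be made runnable; here is a minimal Python version (the Server class is invented for illustration):

```python
# Minimal runnable sketch of "set" vs. "test and set" request handling.
class Server:
    def __init__(self, x):
        self.x = x

    def set_x(self, value):
        # "set": overwrite; repeating an identical request has the same effect.
        self.x = value

    def test_and_set_x(self, expected, value):
        # "test and set": apply the change only if the predicate holds.
        if self.x == expected:
            self.x = value
            return True
        return False

server = Server(6)
assert server.test_and_set_x(6, 7) is True   # applied, X is now 7
assert server.test_and_set_x(6, 7) is False  # no-op: X is 7, not 6
```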
That's the core idea.
From your description, what you are doing is manipulating a collection of orders.
The same basic idea works there. If you can calculate a unique identifier from the information in the request, then you can treat your collection like a set/key-value store.
// collection.get(Id.of(7)) == Nothing
collection.put(Id.of(7), 7)
// collection.get(Id.of(7)) == Just(7)
collection.put(Id.of(7), 7) <- a second, identical request, but the _effect_ is the same.
// collection.get(Id.of(7)) == Just(7)
When that isn't an option, then you need some property of the collection that will change when your edit is made, encoded into the request.
if (collection.size() == 3) {
collection.append(7)
}
A generic way to manage something like this is to consider version numbers -- each time a change is made, the version number is incremented as part of the same transaction
// begin transaction
if (resource.version.get() == expectedVersion) {
resource.version.set(1 + expectedVersion)
resource.applyChange(request)
}
// end transaction
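The version-number scheme above can be sketched concretely. The Resource class here is invented for the example; the method stands in for the begin/end transaction bracket, checking and bumping the version as one step:

```python
# Hypothetical sketch: a change applies only when the client's expected
# version matches, and the version is bumped with the change.
class Resource:
    def __init__(self):
        self.version = 0
        self.state = {}

    def apply_change(self, expected_version, change):
        if self.version != expected_version:
            return False                     # stale or repeated request: refuse
        self.state.update(change)
        self.version = expected_version + 1  # bump version with the change
        return True

r = Resource()
assert r.apply_change(0, {"qty": 5}) is True
assert r.apply_change(0, {"qty": 9}) is False  # retransmission: no-op
```

A client that never saw the acknowledgement can safely resend `apply_change(0, ...)`; the stale version number turns the duplicate into a no-op.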
For a real world example, consider JSON Patch, which includes a test operation that can be used as a condition to prevent "concurrent" modification of a document.
What we're describing in all of these test and set scenarios is the notion of a conditional request
Conditional requests are HTTP requests [RFC7231] that include one or more header fields indicating a precondition to be tested before applying the method semantics to the target resource.
What the conditional requests specification gives you is a generic way to describe conditions in the metadata of your requests and responses, so that generic HTTP components can usefully contribute.
Note well: what this gets us is not a guarantee that the server will do what the client wants. Instead, it's something weaker: the client can safely repeat the request until it receives an acknowledgement from the server.
Surely your documents must have a unique identifier. The semantically correct way would be to use the If-None-Match header, where you send that identifier.
The server then checks whether a document with that identifier already exists, and responds with 412 Precondition Failed if that is the case.
One possible option is validation on the server side. The order should have some uniqueness parameter: a name, an id, or something else, and the client must send it too. The server takes that value (e.g. the name, if names are unique and the client sends one) and looks the order up in the database. If the order is found, you don't need to save it again and should send a 409 Conflict response to the client. If you don't find such an order, you save it and send a 201 Created response.
Best practices:
201 Created for POST
409 Conflict if the resource already exists
Your requests should be idempotent.
From your description, you should be using PUT instead of POST.
Client-side generated ids (GUIDs) and upsert logic on the server side help achieve this.
That way you can implement retry logic on the client side for failed requests without introducing duplicate records.
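The PUT-with-client-generated-id approach above can be sketched briefly. The store here is a plain dict standing in for a database, and the function names are made up for illustration; the point is that because the client makes the id, a retry targets the same resource and the server upserts instead of creating a duplicate:

```python
# Hedged sketch: client-generated GUID + server-side upsert = safe retries.
import uuid

store = {}

def put_order(order_id, order):
    created = order_id not in store
    store[order_id] = order          # upsert: idempotent by key
    return 201 if created else 200   # 201 Created first time, 200 OK on retry

order_id = str(uuid.uuid4())         # generated client-side
first = put_order(order_id, {"item": "book"})
retry = put_order(order_id, {"item": "book"})   # retry after a lost response
assert first == 201 and retry == 200 and len(store) == 1
```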

A situation where HTTP PUT is not idempotent

Consider the following scenario:
Alice updates item1 using http put
Bob updates item1 using http put with different data
Alice updates item1 using http put again with the same data accidentally, for instance, using the back button in a browser
Charlie reads the data
Is this idempotent?
Is this idempotent?
Yes. The relevant definition of idempotent is provided by RFC 7231:
A request method is considered "idempotent" if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.
However, the situation you describe is that of a data race -- the representation that Charlie receives depends on the order that the server applies the PUT requests received from Alice and Bob.
The usual answer to avoiding lost writes is to use requests that target a particular version of the resource to update; this is analogous to using compare and swap semantics on your request -- a write that loses the data race gets dropped on the floor
For example
x = 7
x.swap(7, 8) # Request from Alice changes x == 7 to x == 8
x.swap(8, 9) # Request from Bob changes x == 8 to x == 9
x.swap(7, 8) # No-Op, this request is ignored, x == 9
In HTTP, the specification of Conditional Requests gives you a way to take simple predicates, and lift them into the meta data so that generic components can understand the semantics of what is going on. This is done with validators like eTag.
The basic idea is this: the server provides, in the metadata, a representation of the validator associated with the current representation of the resource. When the client wants to make a request on the condition that the representation hasn't changed, it includes that same validator in the request. The server is expected to recalculate the validator using the current state of the server side resource, and apply the change only if the two validator representations match.
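That validator round-trip can be illustrated with a small sketch. The Document class is invented for the example (hashlib and json are standard library); the validator is derived from the current representation, and a write only applies when the client's If-Match value still matches:

```python
# Illustrative sketch of the ETag / If-Match flow described above.
import hashlib
import json

class Document:
    def __init__(self, body):
        self.body = body

    def etag(self):
        # Validator derived from the current representation.
        canonical = json.dumps(self.body, sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()

    def conditional_put(self, if_match, new_body):
        if if_match != self.etag():
            return 412                # Precondition Failed: lost the race
        self.body = new_body
        return 200

doc = Document({"x": 7})
tag = doc.etag()                      # client GETs and remembers the validator
assert doc.conditional_put(tag, {"x": 8}) == 200   # Alice's write applies
assert doc.conditional_put(tag, {"x": 9}) == 412   # stale validator rejected
```

In the Alice/Bob/Charlie scenario, Alice's accidental re-PUT carries a validator from before Bob's write, so it fails with 412 instead of silently clobbering Bob's data.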
If the origin server rejects a request because the expected precondition headers are missing from the request, it can use 428 Precondition Required to classify the nature of the client error.
Yes, this is idempotent. If that is the wrong behavior for you, we would need to know the business logic behind it.

How to handle network connectivity loss in the middle of a REST POST request?

REST POST is used to create resources.
Let's say we have resource url
"http://example.com/cars"
We want to create a new car.
We POST to "http://example.com/cars" with JSON payload containing car properties (color, weight, model, etc).
Server receives the request, creates a new car, sends a response over the network.
At this point network fails (let's say router stops working properly and ignores every packet).
Client fails with TCP timeout (like 90 seconds).
The client has no idea whether the car was created or not.
Also, the client hasn't received the car resource id, so it can't GET it to check whether it was created.
Now what?
How do you handle this?
You can't simply retry creating, because retrying will just create a duplicate (which is bad).
REST POST is used to create resources.
HTTP POST is used for lots of things. REST doesn't particularly care; it just wants resources that support a uniform interface, and hypermedia.
At this point network fails
Bummer!
Now what? How do you handle this? You can't simply retry creating, because retrying will just create a duplicate (which is bad).
This is a general messaging concern, not directly related to REST. The most common solution is to use the Idempotent Receiver pattern. In short, you
need to define your messages so that the receiver has enough information to recognize the request as something that has already been done.
Ideally, this is being supported at the business level.
Idempotent collections of values are often straightforward; we just need to be thinking sets, rather than lists.
Idempotent collections of entities are trickier; if the request includes an identifier for the new entity, or if we can compute one from the data provided, then we can think of our collection as a hash.
If none of those approaches fits, then there's another possibility. Instead of performing an idempotent mutation of the collection, we make the mutation of the collection itself idempotent. Think "compare and swap" - we encode into the request information that identifies the current state of the collection; if that state is still current when the request arrives, the mutation is applied. If the condition does not hold, then the request becomes a no-op.
Translating this into HTTP, we make a small modification to the protocol for updating the collection resource. First, we GET the current representation; in the metadata, the server provides validators that can be used in subsequent requests. Having obtained the validator, the client evaluates the current representation of the resource to determine whether it needs to be changed. If the client decides to make a change, it submits the change with an If-Match or an If-Unmodified-Since header carrying the validator. The server, before processing the request, checks the validator, and if the precondition fails, immediately abandons the request with 412 Precondition Failed.
Thus, if a conditional state-changing request is lost, the client can at its own discretion repeat the request without concern that the server will misunderstand the client's intent.
Retry it a limited number of times, with increasing delays between the attempts, and make sure the transaction concerned is idempotent.
because retrying will just create a duplicate (which is bad).
It is indeed, and it needs fixing; see above. It should be impossible in your system to create two entries with the same attributes. This is easily accomplished at the database level. You can attain idempotence by having the transaction return the same thing whether the entry already existed or was newly created. Or else just have it return EXISTS if the entry already exists, and adjust your client accordingly.
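The database-level approach can be sketched with sqlite3 from the standard library; the table and column names are made up for the example. A UNIQUE constraint makes the duplicate insert a no-op, and the function reports EXISTS on a retry:

```python
# Sketch: a UNIQUE constraint plus INSERT OR IGNORE makes retried
# creates idempotent at the database level.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (client_ref TEXT UNIQUE, payload TEXT)")

def create_order(client_ref, payload):
    cur = conn.execute(
        "INSERT OR IGNORE INTO orders (client_ref, payload) VALUES (?, ?)",
        (client_ref, payload),
    )
    # rowcount is 1 when the row was inserted, 0 when the UNIQUE
    # constraint caused the insert to be ignored.
    return "CREATED" if cur.rowcount == 1 else "EXISTS"

assert create_order("abc-123", "red car") == "CREATED"
assert create_order("abc-123", "red car") == "EXISTS"   # safe retry
```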

Coordinating checking in the scoreboard

I'm having some trouble solving an issue in my code; I hope you can help me.
I have two modules, A and B. Module A does requests to B, and after a number of cycles B sends a multi-cycle response to A. A can hold up to 8 requests waiting to be responded, and the responses from B don't necessarily come back ordered. That's why we use an ID, to identify the returning data.
To verify this behaviour, I have a scoreboard with several checkers. One of the checks I do is whether the ID used for a request is free or not. To do that, I keep an associative array with the IDs pending a response, and I insert, check, and delete items as needed. I control this from two interfaces and monitors, one for the requests and another for the responses. The response monitor, since responses are more than one cycle long, waits until it has all the data before sending a transaction to the scoreboard, where I update my structs.
The problem is that the moment A sees it is actually getting a valid response from B, it frees the ID and can use it for a new request. That is happening in some of my simulations, and since I won't receive the transaction until the whole response is complete, block A is making a new request with an ID that I won't know is legitimate to use until I get the complete transaction from the monitor.
Any ideas on how to solve this? Thanks!
In the cycle that you see a response from B, why don't you move the request from A into another associative array, one that represents responses that have been initiated? That way you'll have a free slot in the original array to handle new requests from A, but now you'll have the new, second array to handle the multi-cycle responses that have already begun.
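The two-array idea can be sketched (in Python rather than SystemVerilog, with invented handler names): IDs move from `pending` to `in_flight` as soon as the first response beat is seen, freeing the ID for reuse while the multi-cycle response completes.

```python
# Sketch of the two-array scoreboard fix.
pending = {}    # requests awaiting the start of a response
in_flight = {}  # responses that have started but not yet finished

def on_request(req_id, req):
    assert req_id not in pending, "ID reused while still pending"
    pending[req_id] = req

def on_response_start(req_id):
    # First beat of the response: free the slot immediately.
    in_flight[req_id] = pending.pop(req_id)

def on_response_done(req_id):
    # Full transaction received from the monitor: retire the entry.
    return in_flight.pop(req_id)

on_request(3, "read A")
on_response_start(3)      # B starts a multi-cycle response
on_request(3, "read B")   # ID 3 is legitimately reusable already
assert on_response_done(3) == "read A"
```

One caveat: if the same ID can also begin a second response before the first finishes, `in_flight` would need to hold a queue per ID rather than a single entry.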

What is the relationship between the FIX Protocol's OrdID, ClOrdID, OrigClOrdID?

I'm pretty new to the FIX protocol and was hoping someone could help clarify some terms.
In particular could someone explain (perhaps with an example) the flow of NewOrderSingle, ExecutionReport, CancelReplaceRequest and how the fields ClOrdID, OrdID, OrigClOrdID are used within those messages?
A quick note about field usage. My experience is that many who implement FIX do it slightly differently. So be aware that though I am trying to explain correct usage, you may find differences between implementations. When I connect to a new broker I get a FIX specification which details exactly how they use the protocol, and I have to be very careful to note where they have deviated from other implementations.
That said I will give you a rundown of what you have asked for.
There are more complicated orders, but NewOrderSingle is the one most used. It allows you to create a trade for any asset. You will create a new order using this object / msg type, then send it through your session using the sendToTarget() method. You can modify the message after this point in the toApp() method, assuming your application implements the quickfix.Application interface.
The broker (or whoever you are connected to) will send you a reply in the form of an ExecutionReport. Using QuickFIX, that reply enters your application through the fromApp() callback. From there, the best approach is to have your app inherit from the MessageCracker class (or implement the equivalent elsewhere); calling its crack() method then dispatches to the relevant onMessage() overload. You will need to implement a number of these onMessage() methods (exactly which ones depends on what you are doing), the main one being onMessage(ExecutionReport msg, SessionID session). MessageCracker calls this method when you receive an ExecutionReport from the broker; this is the standard reply to a new order.
From there you handle the reply as required.
Some orders do not get filled immediately like Limit orders. They can be changed. For that you will need the CancelReplaceRequest. Your broker will give you details of how to do this specifically for them (again there are differences and not everyone does it the same). You will have to have done a NewOrderSingle first and then you will use this MsgType to update it.
ClOrdID is an ID that the client uses to identify the order. It is sent with the NewOrderSingle and echoed back in the ExecutionReport. OrdID (formally OrderID, tag 37) appears in the ExecutionReport message; it is the ID the broker uses to identify the order. OrigClOrdID is usually used to identify the original order when you send an update (using CancelReplaceRequest); it is supposed to contain the ClOrdID of the order being amended. Some brokers want the ClOrdID of the original order only; others want the ClOrdID of the last update, so the first OrigClOrdID will be the ClOrdID of the NewOrderSingle, and subsequent updates to the same order will use the ClOrdID from the most recent CancelReplaceRequest. Some brokers want the broker-side OrderID rather than a ClOrdID. Note that the CancelReplaceRequest requires a new ClOrdID as well.
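Putting the three fields together, here is an illustrative walk-through (not a real FIX engine; the helper functions and ID values are invented). The client picks ClOrdID values, the broker assigns OrderID, and each CancelReplaceRequest carries the previous ClOrdID in OrigClOrdID:

```python
# Illustrative message dicts showing how ClOrdID / OrderID / OrigClOrdID relate.
def new_order_single(clordid):
    return {"MsgType": "D", "ClOrdID": clordid}

def cancel_replace(prev_clordid, clordid, qty):
    return {"MsgType": "G", "OrigClOrdID": prev_clordid,
            "ClOrdID": clordid, "OrderQty": qty}

def execution_report(broker_ordid, clordid, orig=None):
    rep = {"MsgType": "8", "OrderID": broker_ordid, "ClOrdID": clordid}
    if orig:
        rep["OrigClOrdID"] = orig
    return rep

nos = new_order_single("C1")
ack = execution_report("B-42", "C1")           # broker assigns its own OrderID
amend1 = cancel_replace("C1", "C2", qty=100)   # references the NewOrderSingle
amend2 = cancel_replace("C2", "C3", qty=200)   # references the last replace
assert amend2["OrigClOrdID"] == amend1["ClOrdID"]
```

This shows the "chain of last update" style; as noted above, some brokers instead expect every OrigClOrdID to be "C1", so check your counterparty's spec.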