I'm attempting to implement the FIX protocol in .NET with QuickFIX in order to send out trade orders automatically. Sending orders with the OrderQty tag doesn't raise any issues; however, when I use the CashOrderQty tag instead, the host returns the error message "Conditionally Required Field Missing". The message already includes all the fields specified as required for CashOrderQty. The error only disappears if I add OrderQty to the message, yet the documentation explicitly states that only one of the two must be sent in the message.
I would agree with the earlier comments - this seems to be a question for your counterparty; there is no issue with QuickFIX/n as such. It (apparently) correctly delivers the trade order message to the exchange, and the exchange's response back to you, so only the maintainer of the exchange's documentation can explain the reasons for the behavior you see.
Check the FIX.xml data dictionary on your side; it should match the third-party documentation in terms of required, optional, and supported fields.
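For reference, here is roughly what the relevant entries look like in a QuickFIX-style FIX.xml data dictionary (an illustrative FIX 4.2-style excerpt, not your counterparty's actual dictionary). Note that both quantity fields are typically declared optional, with the either/or constraint enforced by the counterparty rather than the dictionary:

```xml
<!-- Illustrative excerpt of a FIX 4.2-style data dictionary (FIX.xml);
     the exact layout varies between dictionaries -->
<message name="NewOrderSingle" msgtype="D" msgcat="app">
  <field name="ClOrdID" required="Y"/>
  <field name="Symbol" required="Y"/>
  <field name="Side" required="Y"/>
  <!-- Both optional here: the "exactly one of the two" rule is
       validated by the counterparty, not by the dictionary itself -->
  <field name="OrderQty" required="N"/>
  <field name="CashOrderQty" required="N"/>
</message>
```

If your local dictionary and the counterparty's document disagree, fix your dictionary first; if they agree and the host still rejects, only the counterparty can explain the extra requirement.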
I have a question regarding FIX Protocol and I'm quite new to it.
A client sent an order and it got accepted by the broker. After that, the client sent a request (request #1) to modify the quantity to x. However, before request #1 was accepted, the client sent another modification request (request #2) to modify the quantity to y.
I searched the FIX Protocol documentation and found these:
The order sender should chain client order ids on an 'optimistic' basis, i.e. set the OrigClOrdID <41> to the last non-rejected ClOrdID <11> sent
The order receiver should chain client order ids on a 'pessimistic' basis, i.e. set the OrigClOrdID <41> on execution reports that convey the receipt or successful application of a cancel/replace and Order Cancel Reject <9> messages to be the last 'accepted' ClOrdID <11> (See "Order State Change Matrices" for examples of this)
However, I still don't understand how the FIX Protocol handles these requests. Will the quantity be modified to y? Or does it depend on which request is accepted last?
You need to understand that the FIX Protocol is a set of guidelines on how someone who implements the protocol should handle certain scenarios. In practice there are differences in how counterparties handle this.
In your example the quantity should be modified to y in the end. But the quantity will be modified to x first, since that message was received first.
Here are some chaining examples taken from the spec:
https://www.onixs.biz/fix-dictionary/4.4/app_d.html (search for sequencing or chaining)
Here are examples specific to your question where two consecutive replace requests are handled:
https://www.onixs.biz/fix-dictionary/4.4/app_dD.2.a.html
https://www.onixs.biz/fix-dictionary/4.4/app_dD.2.b.html
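To make the 'optimistic' chaining rule concrete, here is a rough QuickFIX/J sketch of your scenario (FIX 4.4 message classes; the symbol, quantities, ClOrdIDs, and session handling are made up, and constructor signatures can vary between QuickFIX/J versions):

```java
import java.time.LocalDateTime;

import quickfix.Session;
import quickfix.SessionID;
import quickfix.SessionNotFound;
import quickfix.field.ClOrdID;
import quickfix.field.OrdType;
import quickfix.field.OrderQty;
import quickfix.field.OrigClOrdID;
import quickfix.field.Side;
import quickfix.field.Symbol;
import quickfix.field.TransactTime;
import quickfix.fix44.OrderCancelReplaceRequest;

public class ChainingExample {
    // Assumes the original order went out with ClOrdID "order-0"
    // on an already-established session.
    static void sendTwoReplaces(SessionID sessionID) throws SessionNotFound {
        // Request #1: modify quantity to x, chained to the original order.
        OrderCancelReplaceRequest req1 = new OrderCancelReplaceRequest(
                new OrigClOrdID("order-0"),   // last non-rejected ClOrdID
                new ClOrdID("order-1"),
                new Side(Side.BUY),
                new TransactTime(LocalDateTime.now()),
                new OrdType(OrdType.LIMIT));
        req1.set(new Symbol("XYZ"));
        req1.set(new OrderQty(100));          // x
        Session.sendToTarget(req1, sessionID);

        // Request #2: sent before req1 is accepted, so it chains
        // optimistically to "order-1" rather than back to "order-0".
        OrderCancelReplaceRequest req2 = new OrderCancelReplaceRequest(
                new OrigClOrdID("order-1"),
                new ClOrdID("order-2"),
                new Side(Side.BUY),
                new TransactTime(LocalDateTime.now()),
                new OrdType(OrdType.LIMIT));
        req2.set(new Symbol("XYZ"));
        req2.set(new OrderQty(200));          // y
        Session.sendToTarget(req2, sessionID);
    }
}
```

If the counterparty accepts both requests, the working quantity ends up at y; the pessimistic chaining on its ExecutionReports tells you which request was applied last.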
Could anyone point me to the relevant section of the FIX spec pertaining to rejected order modification?
Please consider the following scenario:
A limit order (NewOrderSingle: ClOrdID='blah.0') is placed and confirmed as submitted by the broker.
A modification request for the order (OrderCancelReplaceRequest: ClOrdID='blah.1'; OrigClOrdID='blah.0') gets rejected due to, say, a limit violation.
What happens to the original order (ClOrdID='blah.0')? Is it still considered valid and can be filled? Does the FIX specification define the expected behavior for such scenarios and the expected state of the original order?
TL;DR
You should consult your counterparty's FIX specification document(s) for the exact behavior to expect from that specific counterparty when an attempt to replace a working order is rejected.
Long answer
Assuming nothing has happened to the original order 11=blah.0 between the time it was placed and the OrderCancelReplaceRequest with 11=blah.1|41=blah.0 was sent and rejected (e.g., fill, partial fill(s), external cancel), the original order 11=blah.0 should still be working, and can be filled.
There is nothing in the FIX specification that states the exact expected outcome when an attempt to replace a working order is rejected. Since most exchanges/brokers use some flavor of FIX 4.2, I'll point to the documentation for that version:
Order Cancel Reject - The order cancel reject message is issued by the broker upon receipt of a cancel request or cancel/replace request message which cannot be honored. Requests to change price or decrease quantity are executed only when an outstanding quantity exists. Filled orders cannot be changed (i.e. quantity reduced or price changed. However, the broker/sellside may support increasing the order quantity on a currently filled order).
The message specification includes:
Tag | Field Name | Req'd | Comments
39 | OrdStatus | Y | OrdStatus value after this cancel reject is applied.
Whatever the counterparty provides for OrdStatus in the OrderCancelReject message is the state of the original order. I have never run into any counterparty that cancels the original order when a replace request is rejected, but I suppose it's possible. If a counterparty does handle the situation this way, any documentation provided by the counterparty should clearly state so.
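As an illustration, this is roughly how you would inspect the reject with QuickFIX/J (FIX 4.2 classes; the handler body and class name are illustrative, not a prescribed implementation):

```java
import quickfix.FieldNotFound;
import quickfix.MessageCracker;
import quickfix.SessionID;
import quickfix.field.OrdStatus;
import quickfix.fix42.OrderCancelReject;

public class RejectHandler extends MessageCracker {
    public void onMessage(OrderCancelReject reject, SessionID sessionID)
            throws FieldNotFound {
        // OrdStatus (39) is the state of the original order now that
        // the replace request has been rejected.
        char status = reject.getOrdStatus().getValue();
        String original = reject.getOrigClOrdID().getValue();
        if (status == OrdStatus.FILLED || status == OrdStatus.CANCELED) {
            // Order 'original' is no longer working.
        } else {
            // e.g. NEW or PARTIALLY_FILLED: the original order is
            // still live and can be filled.
        }
    }
}
```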
I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How do other people solve this in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will do a PUT request. Make your REST backend handle PUT to non-existing IDs, and you're set.
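Outside Backbone, the same idea is a few lines in any client. A minimal Java sketch (java.net.http; the URL and payload are made up):

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

public class CreateWithPut {
    static void create() throws IOException, InterruptedException {
        // The client picks the ID, so retrying the same PUT is harmless:
        // the server either creates the resource or overwrites it with
        // identical data.
        String id = UUID.randomUUID().toString();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/objects/" + id))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"widget\"}"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // On a timeout or dropped connection, resend the same request
        // with the same id; a duplicate can never be created.
        System.out.println(response.statusCode());
    }
}
```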
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
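Stripped of the draft's header mechanics, the core of POE is just single-use URIs. A rough sketch of the server-side bookkeeping only (class and path names are invented; this is not the actual POE wire protocol):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// One-time POST URIs: the server issues a token, and consuming it a
// second time yields 405. HTTP wiring is omitted.
class PostOnceRegistry {
    private final ConcurrentHashMap<String, Boolean> unused = new ConcurrentHashMap<>();

    // Called when the client asks for a fresh POST URI.
    String issueUri() {
        String token = UUID.randomUUID().toString();
        unused.put(token, Boolean.TRUE);
        return "/orders/poe/" + token;
    }

    // Called when a POST arrives at /orders/poe/{token}.
    // remove() is atomic, so exactly one caller gets 201.
    int consume(String token) {
        return unused.remove(token) != null ? 201 : 405;
    }
}
```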
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where the client cannot generate the unique URI itself (which would allow PUT), and hence POST is needed.
I always use B -- detection of dups due to whatever problem belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuine distinct but similar requests can arrive at the same time, perhaps because a network connection is restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers has the goal of returning an error in response to duplicate requests, but this will normally just incite a client to get or generate a new id and try again.
A simple and robust pattern to solve this problem is as follows: server applications should store all responses to unsafe requests; then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems. Repeat DELETE requests will get the original confirmation, not a 404 error. Repeat POSTs do not create duplicates. Repeated updates do not overwrite subsequent changes. And so on.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
I have a little google doc here if anyone cares.
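As a minimal sketch of the store-and-replay idea described above (in-memory only; the action id, Response type, and executor are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Store-and-replay for unsafe requests: the first execution is cached
// under the client-supplied action id; duplicates get the saved
// response and trigger no further side effects.
class IdempotentExecutor {
    record Response(int status, String body) {}

    private final Map<String, Response> completed = new ConcurrentHashMap<>();

    Response execute(String actionId, Supplier<Response> action) {
        // computeIfAbsent runs the action at most once per actionId;
        // later calls with the same id replay the stored response.
        return completed.computeIfAbsent(actionId, id -> action.get());
    }
}
```

In a real system the map would be a durable store with an expiry policy, since replay has to survive restarts and retries can arrive hours or days later.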
You could try a two step approach. You request an object to be created, which returns a token. Then in a second request, ask for a status using the token. Until the status is requested using the token, you leave it in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
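A minimal sketch of that two-step flow (in-memory state; the names and the cleanup strategy are illustrative):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Two-step creation: step 1 stages the object and returns a token,
// step 2 flips it to active. The token doubles as the object id.
class StagedCreation {
    enum State { STAGED, ACTIVE }

    private final Map<String, State> objects = new ConcurrentHashMap<>();

    // Step 1: create the object in the "staged" state.
    String create() {
        String token = UUID.randomUUID().toString();
        objects.put(token, State.STAGED);
        return token;
    }

    // Step 2: the client confirms with the token; repeat calls are
    // harmless because the object is already active.
    boolean confirm(String token) {
        return objects.replace(token, State.STAGED, State.ACTIVE)
                || objects.get(token) == State.ACTIVE;
    }
    // A background sweep can delete STAGED entries older than some cutoff.
}
```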
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need to exist globally only after the confirmatory step.
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to create the resource, since the whole operation can be done idempotently.
The downside of this is that clients are able to create IDs, so you have no control at all over what IDs they use.
There is another variation of this problem. Having the client generate a unique id means we are asking the customer to solve this problem for us. Consider an environment with publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of its implementation of uniqueness. Hence, it would probably be better to build in intelligence for recognizing whether a request is a duplicate. One simple approach would be to calculate and store a checksum of every request based on attributes of the user input, define a time threshold (x minutes), and compare every new request from the same client against those received in the past x minutes. If the checksum matches, it could be a duplicate request, so add a challenge mechanism for the client to resolve it.
If a client makes two different requests with the same parameters within x minutes, it might be worth ensuring that this is intentional, even if each request comes with a unique request id.
This approach may not be suitable for every use case; however, I think it will be useful for cases where the business impact of executing the second call is high and can potentially cost the customer. Consider a payment processing engine where an intermediate layer ends up retrying failed requests, or a customer double-clicks, causing the client layer to submit two requests.
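Here is a rough sketch of that checksum check (the 5-minute window, key derivation, and in-memory store are all illustrative assumptions):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.time.Duration;
import java.time.Instant;
import java.util.HexFormat;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Server-side duplicate detection by request checksum within a window.
class DuplicateDetector {
    private final Duration window = Duration.ofMinutes(5);
    private final Map<String, Instant> recent = new ConcurrentHashMap<>();

    boolean looksLikeDuplicate(String clientId, String requestBody)
            throws NoSuchAlgorithmException {
        // Checksum over the client id plus the request attributes.
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest((clientId + '|' + requestBody).getBytes(StandardCharsets.UTF_8));
        String key = HexFormat.of().formatHex(hash);
        Instant now = Instant.now();
        recent.values().removeIf(t -> t.isBefore(now.minus(window))); // expire old entries
        return recent.putIfAbsent(key, now) != null; // seen within the window?
    }
}
```

A positive match would then trigger the challenge step rather than an outright rejection.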
Design
Automatic (without the need to maintain a manual blacklist)
Memory optimized
Disk optimized
Algorithm [solution 1]
A REST request arrives with a UUID.
The web server checks whether the UUID is in the in-memory blacklist table (if yes, answer 409).
The server writes the request to the DB (if it was not already filtered by the in-memory table (ETS)).
The DB checks whether the UUID is already present before writing.
If it is, answer 409 to the server, and add the UUID to the blacklist in the memory cache and on disk.
If it is not a duplicate, write to the DB and answer 200.
Algorithm [solution 2]
A REST request arrives with a UUID.
Save the UUID in the memory-cache table (it expires after 30 days).
The web server checks whether the UUID was already in the memory-cache blacklist table [return HTTP 409].
The server writes the request to the DB [return HTTP 200].
In solution 2, the blacklist used to decide duplication lives ONLY in the memory cache, so the DB is never checked for duplicates. The definition of 'duplicate' is "any request that arrives within the expiry period". We also replicate the memory-cache table to disk, so we can refill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk exactly once before writing; if the UUID is a duplicate, subsequent round trips are handled by the memory cache. This solution is better for BigQuery, because requests there are not idempotent, but it is also less optimized.
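A minimal sketch of the solution-2 bookkeeping (an in-memory map standing in for the memory-cache table; the disk replication is elided and the TTL handling is simplified):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Solution 2: the UUID cache alone decides duplication; the DB is
// never consulted for duplicates.
class UuidBlacklist {
    private final Duration ttl = Duration.ofDays(30);
    private final Map<String, Instant> seen = new ConcurrentHashMap<>();

    // Returns the HTTP status for a request carrying this UUID:
    // 409 if seen within the TTL, otherwise 200 (and record it).
    int admit(String uuid) {
        Instant now = Instant.now();
        Instant prev = seen.put(uuid, now);   // record or refresh atomically
        boolean duplicate = prev != null && prev.isAfter(now.minus(ttl));
        return duplicate ? 409 : 200;         // 200: caller writes to the DB
    }
}
```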
See also: HTTP response code for POST when resource already exists
I'm pretty new to the FIX protocol and was hoping someone could help clarify some terms.
In particular could someone explain (perhaps with an example) the flow of NewOrderSingle, ExecutionReport, CancelReplaceRequest and how the fields ClOrdID, OrdID, OrigClOrdID are used within those messages?
A quick note about usages of fields. My experience is that many who implement FIX do it slightly differently. So be aware that though I am trying to explain correct usage you may find that there are differences between implementations. When I connect to a new broker I get a FIX specification which details exactly how they use the protocol. I have to be very careful to make sure where they have deviated from other implementations.
That said I will give you a rundown of what you have asked for.
There are more complicated order types, but NewOrderSingle is the one most used. It allows you to create an order for any asset. You create a new order using this object/msg type, then send it through your session using the method sendToTarget(). You can modify the message after this point through the toApp() callback, assuming your application implements the quickfix.Application interface.
The broker (or whoever you are connected to) will send you a reply in the form of an ExecutionReport. Using QuickFIX, that reply will enter your application through the fromApp() callback. From there, the best thing to do is to have your app inherit from the MessageCracker class (or implement the equivalent elsewhere); using the crack() method from MessageCracker, it will then call back the relevant onMessage() method. You will need to implement a number of these onMessage() methods (exactly which ones depends on what you are doing), the main one being onMessage(ExecutionReport msg, SessionID session). This method will be called by the message cracker when you receive an ExecutionReport from the broker, which is the standard reply to a new order.
From there you handle the reply as required.
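Putting that together, a skeleton QuickFIX/J application looks roughly like this (the handler body is a stub; only the pieces mentioned above are shown):

```java
import quickfix.*;
import quickfix.fix44.ExecutionReport;

// Skeleton application: crack() routes incoming messages from fromApp()
// to typed onMessage() handlers. Only the ExecutionReport handler is shown.
public class MyApp extends MessageCracker implements Application {
    @Override
    public void fromApp(Message message, SessionID sessionID)
            throws FieldNotFound, IncorrectDataFormat, IncorrectTagValue, UnsupportedMessageType {
        crack(message, sessionID); // dispatch to the matching onMessage()
    }

    public void onMessage(ExecutionReport report, SessionID sessionID)
            throws FieldNotFound {
        // The broker's reply to a NewOrderSingle lands here;
        // inspect ClOrdID (11), OrderID (37), OrdStatus (39), etc.
    }

    // Remaining Application callbacks left empty for brevity.
    @Override public void onCreate(SessionID sessionID) {}
    @Override public void onLogon(SessionID sessionID) {}
    @Override public void onLogout(SessionID sessionID) {}
    @Override public void toAdmin(Message message, SessionID sessionID) {}
    @Override public void fromAdmin(Message message, SessionID sessionID) {}
    @Override public void toApp(Message message, SessionID sessionID) {}
}
```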
Some orders, such as limit orders, do not get filled immediately. They can be changed. For that you will need the CancelReplaceRequest. Your broker will give you details of how to do this for them specifically (again, there are differences and not everyone does it the same way). You must have sent a NewOrderSingle first; you then use this MsgType to update it.
ClOrdID is an ID that the client uses to identify the order. It is sent with the NewOrderSingle and returned in the ExecutionReport. The OrderID tag (37) is in the ExecutionReport message; it is the ID that the broker uses to identify the order. OrigClOrdID is usually used to identify the original order when you do an update (using CancelReplaceRequest); it is supposed to contain the ClOrdID of the order being replaced. Some brokers want the original order's id only, others want the ClOrdID of the last update, so the first OrigClOrdID will be the ClOrdID of the NewOrderSingle, and if there are subsequent updates to the same order, it will be the ClOrdID from the last CancelReplaceRequest. Some brokers want the last OrderID and not the ClOrdID. Note that the CancelReplaceRequest will require a new ClOrdID as well.