I am implementing a client to connect to a server which, as far as I can tell, uses a hybrid of FIX 4.2 and FIX 4.4.
The server sends group 453 (NoPartyIDs) with fields in a non-standard order when some events occur.
According to the specification document, the first field should be PartyID (448). With certain messages, however, the first field in the group is PartyIDSource (447), and the message is rejected; per the specification, PartyIDSource is the second field in the group.
I get the following error:
<event> Message 140 Rejected: Group 453's first entry does not start with delimiter 448 (Field=453)
From the documentation and trial and error, I cannot find a way around this issue. Among other guesses, I have tried adding field 447 as the first (non-required) field in the group definition in the data dictionary. I have also set ValidateFieldsOutOfOrder to N in the config.
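Concretely, this is roughly the reordering I tried in the dictionary (the message shown is just an example; in the stock dictionaries the group may sit inside the Parties component instead):

<message name="ExecutionReport" msgtype="8" msgcat="app">
  ...
  <group name="NoPartyIDs" required="N">
    <field name="PartyIDSource" required="N"/> <!-- 447 moved first -->
    <field name="PartyID" required="N"/>
    <field name="PartyRole" required="N"/>
  </group>
</message>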
Is there something I can do so that the message is processed instead of rejected?
Relevant documentation:
Groups are a little more nuanced than other parts of the Data Dictionary.
A group is defined within a message, with the group tag. The first child element of the group tag is the group-counter tag, followed by the other fields in the group in the order in which they should appear in the message.
ValidateFieldsOutOfOrder is not relevant here, so you can take that out.
If I understand you correctly, you're saying that
sometimes 447 comes before 448
but at other times 448 comes before 447
If this is true, then unfortunately your counterparty is being really stupid. Per the FIX spec, fields in repeating groups are supposed to appear in a consistent order. (Also, the first field of each group entry is always required to be present.) If they're flip-flopping fields, they're violating FIX.
If the order were consistent, you would just edit your DD to change the order, and it sounds like you tried that. But if your counterparty is flip-flopping, then your DD will always be wrong some of the time.
I don't have a good answer for you. QF/n is not designed to handle all the ways that counterparties do FIX wrong (nor should it be).
Your counterparty's implementation is sloppy. Try contacting their support and seeing if they'll fix it?
Maybe the title is not very clear, but I'll try to explain.
There are two collections in mongo:
groups
users
Groups are created by users.
The UI sends /groups/1/10 to read the first 10 groups. We don't want to return groups whose creators (users) have been deleted.
Example:
UI makes call: /groups/1/10
Let us say only 8 records are available because 2 users have been deleted from the system, so their groups are not available.
What should we do?
Should the UI make another request, like /groups/1/2?
Should we read, say, 20 groups, take the first 10 available ones, and return them? That may not work well for the second or third pages.
There is not enough information here to give a specific answer; in particular, we need to know more about the schema you are using. We'll try to give some general details that might point things in the right direction. We are also assuming that your endpoints are structured as /groups/<pageNumber>/<pageSize>.
Broadly speaking, if the client calls /groups/1/10 and there are (at least) 10 valid matching results, then the system should return 10 results.
It's not clear what you mean when you say:
only 8 records are available because 2 users have been deleted from the system, so their groups are not available ... Should the UI make another request, like /groups/1/2?
The first part of that statement implies that there are only 8 valid results, but the second part implies that there are at least 2 more valid results that can be retrieved. If there are 10 valid results, then they should all be returned.
How you accomplish this depends on how invalid groups and/or deleted users are represented in your system. If, for example, the documents in your groups collection have some sort of valid field that becomes false when the creating user is deleted, then you should apply a filter to remove those results, such as:
db.groups.find({ valid: true }).limit(10)
If instead the groups have a document that references the user who created it, then you may need to do something a bit more complex. That may be along the lines of doing an aggregation that does a $lookup on the users collection and then perform a subsequent $match to remove the groups from the results whose creators have been deleted.
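A sketch of that aggregation (the createdBy field and the _id join are assumptions about your schema):

db.groups.aggregate([
  // join each group to the user that created it
  { $lookup: { from: "users", localField: "createdBy", foreignField: "_id", as: "creator" } },
  // keep only groups whose creator still exists
  { $match: { "creator.0": { $exists: true } } },
  { $skip: 0 },   // (pageNumber - 1) * pageSize
  { $limit: 10 }
])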
While there are many approaches to this problem, the only one that I would consider incorrect would be to force the client to perform the group validity check and/or force the client to make multiple requests.
I'm attempting to implement the FIX protocol in .NET with QuickFIX in order to automatically send out trade orders. Sending orders with the OrderQty tag doesn't raise any issues; however, when using the CashOrderQty tag, the host returns the error message "Conditionally Required Field Missing". The message already includes all the fields specified as required for CashOrderQty, and the error only disappears if I add OrderQty to the message. Yet the documentation explicitly states that only one of the two must be sent in the message.
I would agree with the earlier comments - it seems to be a question for your counterparty rather than an issue with QuickFIX/n as such. It (apparently) correctly delivers the trade order message to the exchange, and the exchange's response back to you, so only the maintainer of the exchange's documentation can explain the reasons for the behavior you see.
Also check the FIX.xml dictionary on your side; it should match the third-party documentation in terms of required, optional, and supported fields.
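For instance, in a FIX 4.2-style dictionary you would expect the NewOrderSingle definition to carry both quantity tags as optional, something like this sketch (not your counterparty's actual dictionary):

<message name="NewOrderSingle" msgtype="D" msgcat="app">
  ...
  <field name="OrderQty" required="N"/>
  <field name="CashOrderQty" required="N"/>
  ...
</message>

If your local copy marks OrderQty as required="Y", that alone would explain validation trouble on your side, though it would not change what the host accepts.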
Could anyone point me to the relevant section of the FIX spec pertaining to rejected order modification?
Please consider the following scenario:
A limit order (NewOrderSingle: ClOrdID='blah.0') is placed and confirmed as submitted by the broker.
A modification request for the order (OrderCancelReplaceRequest: ClOrdID='blah.1'; OrigClOrdID='blah.0') gets rejected due to, say, a limit violation.
What happens to the original order (ClOrdID='blah.0')? Is it still considered valid and can be filled? Does the FIX specification define the expected behavior for such scenarios and the expected state of the original order?
TL;DR
You should consult your counterparty's FIX specification document(s) for the exact behavior to expect from that specific counterparty when an attempt to replace a working order is rejected.
Long answer
Assuming nothing has happened to the original order 11=blah.0 between the time it was placed and the OrderCancelReplaceRequest with 11=blah.1|41=blah.0 was sent and rejected (e.g., fill, partial fill(s), external cancel), the original order 11=blah.0 should still be working, and can be filled.
There is nothing in the FIX specification that states the exact expected outcome when an attempt to replace a working order is rejected. Since most exchanges/brokers use some flavor of FIX 4.2, I'll point to the documentation for that version:
Order Cancel Reject - The order cancel reject message is issued by the broker upon receipt of a cancel request or cancel/replace request message which cannot be honored. Requests to change price or decrease quantity are executed only when an outstanding quantity exists. Filled orders cannot be changed (i.e. quantity reduced or price changed; however, the broker/sellside may support increasing the order quantity on a currently filled order).
The message specification includes:
Tag | Field Name | Req'd | Comments
39 | OrdStatus | Y | OrdStatus value after this cancel reject is applied.
Whatever the counterparty provides for OrdStatus in the OrderCancelReject message is the state of the original order. I have never run into any counterparty that cancels the original order when a replace request is rejected, but I suppose it's possible. If a counterparty does handle the situation this way, any documentation provided by the counterparty should clearly state so.
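For illustration, an OrderCancelReject for the scenario above might look like this (values are made up; | stands for the SOH delimiter, and session-level tags are omitted):

8=FIX.4.2|35=9|37=BRK123|11=blah.1|41=blah.0|39=0|434=2|58=limit violation|

Here 434=2 says the reject applies to a cancel/replace request, and 39=0 (New) tells you the original order blah.0 is still working.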
I'm pretty new to the FIX protocol and was hoping someone could help clarify some terms.
In particular, could someone explain (perhaps with an example) the flow of NewOrderSingle, ExecutionReport, and CancelReplaceRequest, and how the fields ClOrdID, OrderID, and OrigClOrdID are used within those messages?
A quick note about usage of fields. My experience is that many who implement FIX do it slightly differently. So be aware that, though I am trying to explain correct usage, you may find differences between implementations. When I connect to a new broker, I get a FIX specification which details exactly how they use the protocol. I have to be very careful to work out where they have deviated from other implementations.
That said I will give you a rundown of what you have asked for.
There are more complicated order types, but NewOrderSingle is the one most used. It allows you to create a trade for any asset. You will create a new order using this message type, then send it through your session using the sendToTarget() method. You can modify the message after this point in the toApp() callback, assuming your application implements the quickfix.Application interface.
The broker (or whoever you are connected to) will send you a reply in the form of an ExecutionReport. Using QuickFIX, that reply will enter your application through the fromApp() callback. From there, the best approach is to have your app inherit from the MessageCracker class (or implement the cracking elsewhere); calling its crack() method dispatches to the relevant onMessage() overload. You will need to implement a number of these onMessage() methods (which ones depends on what you are doing), the main one being onMessage(ExecutionReport msg, SessionID session). MessageCracker calls this method when you receive an ExecutionReport from the broker, which is the standard reply to a new order.
From there you handle the reply as required.
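For example, a QuickFIX/J sketch of that wiring (class name and handling details are illustrative):

import quickfix.*;
import quickfix.fix42.ExecutionReport;

public class MyApp extends MessageCracker implements Application {
    public void fromApp(Message message, SessionID sessionID)
            throws FieldNotFound, IncorrectDataFormat, IncorrectTagValue, UnsupportedMessageType {
        crack(message, sessionID); // dispatches to the matching onMessage() overload
    }

    public void onMessage(ExecutionReport report, SessionID sessionID) throws FieldNotFound {
        String clOrdID = report.getClOrdID().getValue(); // our ID, echoed back
        String orderID = report.getOrderID().getValue(); // the broker's ID for the order
        // inspect OrdStatus(39) / ExecType(150) here and handle the ack/fill/reject
    }

    // remaining Application callbacks, left empty for this sketch
    public void onCreate(SessionID sessionID) {}
    public void onLogon(SessionID sessionID) {}
    public void onLogout(SessionID sessionID) {}
    public void toAdmin(Message message, SessionID sessionID) {}
    public void fromAdmin(Message message, SessionID sessionID) {}
    public void toApp(Message message, SessionID sessionID) {}
}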
Some orders, like limit orders, do not get filled immediately. They can be changed, and for that you will need the CancelReplaceRequest. Your broker will give you details of how to do this for them specifically (again, there are differences and not everyone does it the same way). You must have sent a NewOrderSingle first, and then you use this MsgType to update it.
ClOrdID is an ID that the client uses to identify the order. It is sent with the NewOrderSingle and echoed back in the ExecutionReport. OrderID (tag 37) is set in the ExecutionReport; it is the ID the broker uses to identify the order. OrigClOrdID is used to identify the original order when you do an update (via CancelReplaceRequest); it is supposed to contain the ClOrdID of the order being changed. Some brokers want the ClOrdID of the original order only, while others want the ClOrdID of the last update: in that case the first OrigClOrdID will be the ClOrdID of the NewOrderSingle, and for subsequent updates to the same order it will be the ClOrdID from the last CancelReplaceRequest. Some brokers even want the broker-side OrderID rather than a ClOrdID. Note that the CancelReplaceRequest requires a new ClOrdID as well.
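To tie those IDs together, a rough QuickFIX/J sketch (symbol, prices, IDs, and the sessionID are illustrative):

import quickfix.*;
import quickfix.field.*;
import quickfix.fix42.NewOrderSingle;
import quickfix.fix42.OrderCancelReplaceRequest;

public class OrderFlowSketch {
    public static void placeAndReplace(SessionID sessionID) throws SessionNotFound {
        // 1) Place the original order; we choose ClOrdID "order.0"
        NewOrderSingle nos = new NewOrderSingle(
                new ClOrdID("order.0"), new HandlInst('1'), new Symbol("MSFT"),
                new Side(Side.BUY), new TransactTime(), new OrdType(OrdType.LIMIT));
        nos.set(new OrderQty(100));
        nos.set(new Price(41.50));
        Session.sendToTarget(nos, sessionID);

        // 2) Amend it: a fresh ClOrdID "order.1", with OrigClOrdID pointing at "order.0"
        //    (some brokers instead want the ClOrdID of the last accepted replace here)
        OrderCancelReplaceRequest replace = new OrderCancelReplaceRequest(
                new OrigClOrdID("order.0"), new ClOrdID("order.1"), new HandlInst('1'),
                new Symbol("MSFT"), new Side(Side.BUY), new TransactTime(),
                new OrdType(OrdType.LIMIT));
        replace.set(new OrderQty(150));
        replace.set(new Price(41.25));
        Session.sendToTarget(replace, sessionID);
    }
}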
I'm implementing a RESTful API which exposes Orders as a resource and supports pagination through the resultset:
GET /orders?start=1&end=30
where the orders to paginate are sorted by ordered_at timestamp, descending. This is basically approach #1 from the SO question Pagination in a REST web application.
If the user requests the second page of orders (GET /orders?start=31&end=60), the server simply re-queries the orders table, sorts by ordered_at DESC again and returns the records in positions 31 to 60.
The problem I have is: what happens if the resultset changes (e.g. a new order is added) while the user is viewing the records? In the case of a new order being added, the user would see the old order #30 in first position on the second page of results (because the same order is now #31). Worse, in the case of a deletion, the user sees the old order #32 in first position on the second page (#31) and wouldn't see the old order #31 (now #30) at all.
I can't see a solution to this without somehow making the RESTful server stateful (urg) or building some pagination intelligence into each client... What are some established techniques for dealing with this?
For completeness: my back-end is implemented in Scala/Spray/Squeryl/Postgres; I'm building two front-end clients, one in backbone.js and the other in Python Django.
The way I'd do it is to make the indices run from old to new, so they never change. Then, when querying without any start parameter, return the newest page. The response should also contain an index indicating which elements are contained, so you can calculate the indices to request for the next (older) page. While this is not exactly what you asked for, it seems like the easiest and cleanest solution to me.
Initial request: GET /orders?count=30 returns:
{
  "start": 1039,
  "count": 30,
  ... // data
}
From this the consumer calculates that he wants to request:
Next request: GET /orders?start=1009&count=30, which then returns:
{
  "start": 1009,
  "count": 30,
  ... // data
}
Instead of raw indices you could also return a link to the next page:
{
  "next": "/orders?start=1009&count=30"
}
This approach breaks if items get inserted or deleted in the middle. In that case you should use some auto incrementing persistent value instead of an index.
The sad truth is that all the sites I see have pagination "broken" in that sense, so there must not be an easy way to achieve that.
A quick workaround could be reversing the ordering, so that the position of the items is absolute and unchanging as new items are added. From your front page you can point at the latest indices to ensure consistent navigation from there.
Pros: same url gives the same results
Cons: there's no evident way to get the latest elements... Maybe you could use negative indices and redirect the result page to the absolute indices.
With a RESTful API, application state should live in the client. Here that state should be some sort of timestamp or version number recording when you started looking at the data. On the server side, you will need some form of audit trail, which is properly server data, as it does not depend on whether there have been clients and what they have done. At the very least, the server should know when the data last changed. There is no contradiction with REST here.
You could add a version parameter to your GET. When the client first requests a page, it normally does not send a version; the server's reply contains one. For instance, if the reply contains links to next/other pages, those links contain &version=... The client should send the version when requesting another page.
When the server receives a request with a version, it should at least know whether the data have changed since the client started looking and, depending on what sort of audit trail you have, how they have changed. If they have not, it answers normally, transmitting the same version number. If they have, it may at least tell the client, and depending on how much it knows about how the data have changed, it may tailor the reply accordingly.
Just as an example, suppose you get a request with start, end, and version, and you know that since that version was current, 3 rows coming before start have been deleted. You might send a redirect to start-3, end-3 with a new version.
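A sketch of such an exchange, using the URL shape from the question (version numbers are illustrative):

GET /orders?start=31&end=60&version=17
-- the server's audit trail shows 3 rows before position 31 were deleted since version 17
-> 307 redirect to /orders?start=28&end=57&version=18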
WebSockets can do this. You can use something like pusher.com to catch realtime changes to your database and pass the changes to the client. You can then bind different pusher events to work with models and collections.
Just going to throw this out there. Please feel free to tell me if it's completely wrong and why.
This approach uses a left_off value to page through the results without using offsets.
Suppose your results need to be ordered by timestamp, order_at DESC.
So when you ask for the first result set, it's:
SELECT * FROM Orders ORDER BY order_at DESC LIMIT 25;
This is the case when you ask for the first page (in URL terms, probably the request that doesn't have any left_off parameter). The general form of the URL is:
yoursomething.com/orders?limit=25&left_off=$timestamp
Then, when receiving your data set, just grab the timestamp of the last viewed item, e.g. 2015-12-21 13:00:49.
Now, to request the next 25 items, go to yoursomething.com/orders?limit=25&left_off=2015-12-21 13:00:49 (the last viewed timestamp).
In SQL you would make the same query, adding a condition that the timestamp is less than $left_off:
SELECT * FROM Orders
WHERE order_at < '2015-12-21 13:00:49'
ORDER BY order_at DESC
LIMIT 25;
You should get the next 25 items after the last seen one.
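One caveat to add (my addition, assuming the table has an id primary key): if two orders share the same order_at, a plain less-than can skip or repeat rows at the page boundary. Comparing on a compound key avoids that:

SELECT * FROM Orders
WHERE (order_at, id) < ('2015-12-21 13:00:49', 1234)
ORDER BY order_at DESC, id DESC
LIMIT 25;

Here 1234 is the id of the last item on the previous page; the row-value comparison works in Postgres (and MySQL).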
For anyone who sees this answer: please comment on whether this approach is relevant or even possible in the first place. Thank you.