I'm trying to understand the variety of message types and data fields in FIX protocol.
I understand most of it, but I'm still not sure about Quotes and HandlInst.
When a dealer or a broker wants to trade in the market, he has a list of all available products (e.g. EUR/USD, USD/JPY, ...). Each product has a sell price and a buy price, which are updated at a high rate. Which message type carries these values? A Quote message type (QuoteRequest and the Quote response)?
A broker has the option to decide, for each of his dealers, whether he automatically responds as the counterparty to the dealer's orders, or the orders go out to the market for a trade. Which field in the order message types carries that mark? I was thinking about HandlInst <21>, but I'm not quite sure...
Thanks for your help
These are vendor-specific questions. FIX just gives you a pile of messages and fields you can use, and some recommendations for how to use them. Some counterparties even follow those recommendations, but nearly all of them add their own weird customizations or use certain fields in weird ways.
If you are connecting to an external counterparty, you need to read their docs for their specific FIX interface. That will tell you which fields they use, how they use them, and what they expect from you.
So, get your counterparty's docs and read them.
OK Kitsune, here are my answers for you (you should pay me for this!)
Yes: Quote, and usually QuoteCancel as well.
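For illustration, here is a minimal sketch of a Quote (35=S) message of the kind that typically carries those two-sided prices. It builds the raw tag=value string by hand; the tag numbers come from the standard FIX 4.4 dictionary, but the symbol, QuoteID and prices are made up, and any real counterparty will have its own required fields.

```python
# Sketch: assembling a FIX 4.4 Quote (35=S) as a raw tag=value string.
# A real engine (e.g. QuickFIX/J) handles framing for you; this just shows
# which fields carry the two-sided price.
SOH = "\x01"  # FIX field delimiter

def build_fix(msg_type, body_fields):
    """Assemble 35=... plus body fields, then add BodyLength(9) and CheckSum(10)."""
    body = f"35={msg_type}{SOH}" + SOH.join(f"{tag}={val}" for tag, val in body_fields) + SOH
    head = f"8=FIX.4.4{SOH}9={len(body)}{SOH}"
    msg = head + body
    # CheckSum(10) is the byte sum modulo 256, zero-padded to three digits.
    return msg + f"10={sum(msg.encode('ascii')) % 256:03d}{SOH}"

quote = build_fix("S", [
    (117, "Q-0001"),    # QuoteID (illustrative)
    (55,  "EUR/USD"),   # Symbol
    (132, "1.0841"),    # BidPx: price at which the quoter will buy
    (133, "1.0843"),    # OfferPx: price at which the quoter will sell
])
```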
I think you are describing an internal broker decision that is not carried over FIX. HandlInst can be used by a client sending an order to the broker to specify manual or automatic execution, but I believe that applies in specific limit-order cases only, not the usual fill-or-kill flow. If the order were $100mio, then maybe a client would specify manual execution. As in your other post, the ExecutionReport can indicate manual or automatic execution as well; worth looking into.
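For reference, here is a small sketch of the three standard HandlInst(21) values from the FIX 4.4 dictionary, and a minimal NewOrderSingle body using one of them. All the order values are illustrative, and a counterparty may restrict or ignore this field entirely.

```python
# The three standard HandlInst(21) values (FIX 4.4 dictionary):
HANDL_INST = {
    "1": "Automated execution, private, no broker intervention",
    "2": "Automated execution, public, broker intervention OK",
    "3": "Manual order, best execution",
}

SOH = "\x01"

def new_order_fields(cl_ord_id, symbol, side, qty, handl_inst="1"):
    """Body fields of a minimal NewOrderSingle (35=D); all values illustrative."""
    assert handl_inst in HANDL_INST
    return SOH.join([
        f"11={cl_ord_id}",   # ClOrdID
        f"21={handl_inst}",  # HandlInst
        f"55={symbol}",      # Symbol
        f"54={side}",        # Side: 1=Buy, 2=Sell
        f"38={qty}",         # OrderQty
        "40=1",              # OrdType: 1=Market
    ])
```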
Per the OPC UA PubSub specification, DataSetWriterId in the DataSetMessage header is optional, and DataSetMessage ordering can be set to Undefined_0, in which case the order of DataSetMessages within a NetworkMessage is not guaranteed. How can we identify which DataSetMessage we received if there are multiple DataSetWriters in the WriterGroup?
The OPC UA PubSub specification makes many things optional. Whoever configures the system is then responsible for making choices that make sense, and what you have described is just one example of such a case. The idea is that if you cannot identify what you need "on the wire", you should not do it. So if you want to leave out DataSetWriterId and there is no other way of getting the same information, you have to put just one enabled DataSetWriter into the WriterGroup.
There are/will be profiles ("header layouts") in the UA spec that combine the reasonable and most commonly used options together, with the expectation that most systems will stick to one of the profiles (while still allowing you to do something else, as long as it conforms to the text of the spec and makes sense).
We are currently working on a FIX connection where data that should only be validated can be marked as such. It has been decided to mark this data with a specific TargetSubID, but that implies a new session.
Let's say we send messages to the session FIX.4.4:S->T. If we then get a message that should only be validated, carrying TargetSubID V, this implies the session FIX.4.4:S->T/V. If this session is not configured, we get the error
Unknown session: FIX.4.4:S->T/V
and if we explicitly configure this session next to the other, there is the error
quickfix.Session – [FIX/Session] Disconnecting: Encountered END_OF_STREAM
which, as bhageera says, happens because you log in with the same credentials:
(...) the counterparty I was connecting to allows only 1 connection
per user/password (i.e. session with those credentials) at a time.
I'm not a FIX expert, but I'm wondering whether TargetSubID is simply being misused here. If it isn't, I would like to know how to do this properly. We develop the FIX client with camel-quickfix.
It depends a lot on what your system is like and what you want to achieve in the end.
Usually the dimensions to assess are:
maximising the flexibility
minimising the amount of additional logic required to support the testing
minimising the risk of bad things happening on accidental connection from a test to a prod environment (may happen, despite what you might think).
Speaking for myself, I would not use tags potentially involved in session/routing behavior for testing, unless all I need is routing features and my system reliably behaves the way I expect (probably not your case).
Instead I would consider one of these:
pick something from the user-defined tag range (5000-9999)
use one of the symbology tags (say Symbol(55)), corrupted in some reversible way (say "TEST_VOD.L" in tag 55 instead of "VOD.L")
A tag from a custom range would give a lot of flexibility, a corrupted symbology tag would make sure a test order would bounce if sent to prod by accident.
Either solution may require a tag-based routing and transformation layer. Both can be built in a couple of hours in generic form if you are using something Java-based (I'd look towards javax.script / Nashorn).
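A sketch of the "reversibly corrupted symbol" option, assuming a hypothetical "TEST_" marker (pick anything your production venue is guaranteed to reject):

```python
# Mark validation-only traffic by prefixing Symbol(55) with a marker that is
# stripped again on the way back. "TEST_" is an arbitrary, illustrative choice.
TEST_PREFIX = "TEST_"

def mark_for_validation(symbol):
    """Prefix the symbol unless it is already marked."""
    return symbol if symbol.startswith(TEST_PREFIX) else TEST_PREFIX + symbol

def unmark(symbol):
    """Reverse mark_for_validation()."""
    return symbol[len(TEST_PREFIX):] if symbol.startswith(TEST_PREFIX) else symbol

def is_validation_only(symbol):
    return symbol.startswith(TEST_PREFIX)
```

The nice property, as noted above, is that a marked order bounces if it ever reaches production by accident, because "TEST_VOD.L" is not a listed symbol.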
It's up to the counterparties - sometimes Sender/TargetSubID are considered part of the unique connection, sometimes they distinguish messages on one connection.
Does your library have a configuration option to exclude the sub IDs from the connection lookups? e.g. in QuickFix you can set the SessionQualifier.
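If you are on QuickFIX/J, a SessionQualifier lets you configure two sessions with the same CompIDs without overloading TargetSubID. A hypothetical settings fragment (the qualifier name is made up; check your library's settings reference for the exact keys it supports):

```ini
[session]
BeginString=FIX.4.4
SenderCompID=S
TargetCompID=T

[session]
BeginString=FIX.4.4
SenderCompID=S
TargetCompID=T
SessionQualifier=validation
```

Bear in mind that if the counterparty allows only one connection per set of credentials, as quoted above, a second concurrent session will still be rejected, qualifier or not.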
I have a question about the FIX protocol.
I plan to send a PositionReport message without having received a PositionReportRequest. But I must fill a field, ClearingBusinessDate, in the PositionReport, and I don't know the purpose of that field. Although the PositionReportRequest has the same field, I will not get a request message before sending the report, so I have no idea what the value should be. And worst of all, it is a required field.
What should be the value of ClearingBusinessDate field?
Thanks
This is more a question for your counterparty than a question about general FIX protocol.
If you're connecting to an outside FIX counterparty (e.g. exchange, clearing firm, etc), they should have documentation on their interface that should describe what the expected fields and field values should be. If they don't have docs, then you'll have to ask them.
FIX is a very loose protocol. All of the messages and fields in the default message/field definitions are really just suggestions. In practice, most counterparties alter and mutilate these message/field definitions in numerous ways. They may add custom fields, change field types, make optional fields required and vice versa, remove fields, etc etc. I've never seen a counterparty not mess with the definitions.
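That said, once your counterparty confirms what they expect, the mechanics are simple: ClearingBusinessDate(715) is a LocalMktDate, i.e. a YYYYMMDD string. A common convention for unsolicited position reports is the current business date; the sketch below assumes that convention, which you should verify with the counterparty.

```python
# Sketch: formatting ClearingBusinessDate(715) as a LocalMktDate (YYYYMMDD).
from datetime import date

def clearing_business_date(d=None):
    """Value for tag 715; defaults to today, which may not match your
    counterparty's clearing calendar -- confirm with them."""
    return (d or date.today()).strftime("%Y%m%d")
```

Note that some venues roll the business date at an evening cutoff, so "today" on the wall clock is not always the right clearing date.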
(P.S. You have a very low answer-acceptance rate. Please consider going back over your past questions and accepting the best answers. You'll get rep points, and you'll make StackOverflow better.)
I'm creating v2 of an existing RESTful web api.
The responses are JSON lists of objects, roughly in the form:
[
  {
    "name1": "value1",
    "name2": "value2"
  },
  {
    "name1": "value3",
    "name2": "value4"
  }
]
One problem we've observed with v1 is that some clients access fields by integer position instead of by name. This means that if we decide to add fields to the response (which we had originally considered a compatibility-preserving change), some clients' code breaks unless we add the fields at the end. Even then, other clients' code breaks anyway, because it fails in some way when it encounters an unexpected attribute name.
To counter this in v2, we are considering randomly reordering the fields in every response. This will force clients to index fields by name instead of by position.
Additionally, we are considering adding a randomly-named field to every response. This will force clients to ignore fields they don't recognize.
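The two measures described above can be sketched like this (the canary-field naming scheme is just one possible choice):

```python
# Sketch: shuffle each object's key order and add a randomly named dummy
# field, so clients that index by position or reject unknown names fail
# immediately rather than on a later, supposedly compatible release.
import json
import random
import string

def harden(obj, rng=random):
    canary = "x_" + "".join(rng.choice(string.ascii_lowercase) for _ in range(8))
    items = list(obj.items()) + [(canary, "ignore-me")]
    rng.shuffle(items)
    # Python dicts keep insertion order, so json.dumps emits the shuffled order.
    return dict(items)

def to_json(objects, rng=random):
    return json.dumps([harden(o, rng) for o in objects])
```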
While this sounds somewhat barmy, it does have the advantage that we will be able to add new fields, safe in the knowledge that this isn't breaking any clients. This means we can issue compatible updates to v2.1, v2.3, etc at the same URL, and that means we will only have to maintain & support a smaller number of API versions.
The alternative is to issue compatibility-breaking v3, v4, at new URLs, which means that we will have to maintain & support many incompatible API versions, which will stretch us that little bit thinner.
Is this a bad idea, and if so, why? Are there any other similar ideas I should think about?
Update: The first couple of responses point out that if I document the issue (i.e. state in the docs that fields may be added or reordered), then I am no longer to blame if client code breaks when I subsequently add or reorder fields. Sadly, I don't think this is an appropriate option for us: many dozens of organisations rely on the functionality of our APIs for real-world transactions with substantial financial impact. These organisations are not technically oriented, and the resulting implementations at the client end cover the whole spectrum of technical proficiency.

We already documented in v1 that fields may be added or reordered, and that clearly didn't work: we're now having to issue v2 because many clients, through lack of time, experience, or ability, still wrote code that breaks when we add new fields. If I were now to add fields to the interface, it would break a dozen different companies' integrations with us, which means they (and we) would be bleeding money every minute. If I were to refuse to revert or fix the change, saying "They should have read the docs!", I would soon be out of a job, and rightly so.

We could attempt to educate the 'failing' partners, but this is doomed to fail, as the problem grows every month while we continue to grow. My question is: can I systemically head the whole issue off at the pass, preventing this situation from ever arising, no matter what clients try to do? If the techniques I suggest would work, why shouldn't I use them? Why isn't everyone else using them?
If you want your media types to be "evolvable", make that point very clear in the documentation. Similarly, if the placement order of fields is not guaranteed, make that explicitly clear too. If you supply sample code for your API, make sure it does not rely on field ordering.
However, even assuming that you have to maintain different versions of your media types, you don't have to version the URI. REST gives you the ability to maintain the same version-agnostic URI but use HTTP content negotiation (via the Accept and Content-Type headers) to offer different payloads at the same URI.
Therefore any client that doesn't explicitly wish to accept your new v2/v3/etc encoding won't get it. By default, you can return the old v1 encoding with the original field ordering and all of those brittle client apps will work fine. However, new client developers will know (thanks to your documentation) to indicate via Accept that they are willing and able to see the new fields and they don't care about their order. Best of all, you can (and should) use the same URI throughout. Remember - different payloads like this are just different representations of the same underlying resource, so the URI should be the same.
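A minimal sketch of that dispatch, assuming a made-up vendor media type (`application/vnd.example.v2+json`):

```python
# Sketch: pick a representation version from the Accept header. The vendor
# media-type name is illustrative; the point is that one URI can serve both
# the old and the new encoding.
def select_version(accept_header):
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "application/vnd.example.v2+json" in accepted:
        return "v2"
    # Default to the original encoding so existing clients keep working.
    return "v1"
```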
I've decided to run with the described techniques, to the max. I haven't heard any objections to them that hold any water for me. Brian's answer, about re-using the same URI for different API versions, is a solid and much-appreciated complementary idea (with upvote), but I can't award it 'best answer' because it doesn't get to the core of my original question.
I've just discovered an assumption in my use of SimpleDB. I suspect it's safe but would like other opinions since the docs don't seem to cover it.
So say Process 1 stores an item with x attributes. When Process 2 tries to access said item (without consistent read) & finds it, is it guaranteed to have all the attributes stored by Process 1?
I'm excluding the possibility that another process could have changed the data.
I also know that Process 2 has no guarantee of seeing the item unless consistent read is used, I'm just talking about the point when it does eventually see it.
I guess the question is, once I can get an item & am not changing it anywhere else can I assume it has an ad-hoc fixed schema and access all my expected attributes without checking they actually exist?
I don't want to be in a situation where I need to keep requesting items until they have all the attributes I need to use them.
Thanks.
Although Amazon makes no such guarantee in the documentation, the current implementation of their eventual consistency ensures that you'll see either all the attributes stored by Process 1 or none of them.
See this thread over at the AWS forums and more specifically this answer by an Amazon employee confirming the behavior (emphasis mine).
I don't think we make that guarantee in the documentation, but the current implementation treats each Put request as a bundle. It won't split the request up and apply the operations piecemeal. You'll get either step-1 responses or step-2 responses until eventual consistency shakes out and leaves you with step-2 responses.
While this is undocumented behavior, I suspect quite a few SimpleDB users are relying on it by now, so Amazon is unlikely to change it anytime soon. But that's just my guess.