The FIX messages are being ‘truncated’ when we request the ICE TradeCaptureReport message. As a result, we are not consuming the ‘repeating groups’ of the message (such as parties and leg details).
We haven't used any data dictionary. Is there a solution to resolve this issue?
The solution is simple: use the data dictionary.
Otherwise the FIX engine does not know how to parse repeating groups: it needs the delimiter tag to determine where each instance of a repeating group starts.
Here is an explanation (you probably can ignore the code examples): https://ref.onixs.biz/cpp-fix-engine-guide/group__fix-protocol-repeating-group.html
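For instance, with QuickFIX/J the session settings would look roughly like this sketch (the CompIDs and the dictionary file name are placeholders; for ICE you would point DataDictionary at the dictionary file that includes their TradeCaptureReport customizations):

    [SESSION]
    BeginString=FIX.4.4
    SenderCompID=YOUR_SENDER
    TargetCompID=ICE
    UseDataDictionary=Y
    DataDictionary=FIX44-ICE.xml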
One key difference often highlighted between Events and Commands in EDA is as follows:
Events are something which has happened
Commands are requests for something which might happen
What I can't understand is why implementations often use both of these together, when one always seems to be redundant. For example, when we need to check if a customer has enough credit in order to complete an order, we can achieve this purely with events:
There are no commands on this diagram whatsoever. But in this article, it's suggested there are commands created in addition to the events behind the scenes:
What is the benefit of also including Commands here? Doesn't it just add complexity? And which of the two is the Customer Service actually subscribing to: the CreatePendingOrderCommand, or the OrderCreatedEvent? Surely only one of these is acted upon by the Customer Service?
What I can't understand is why implementations often use both of these together, when one always seems to be redundant?
In general, they aren't quite the same thing; it's only in the simple cases that the information is redundant.
"Commands" a something akin to proposals: they are messages that we send toward some authority to effect some change. "Events" are messages being sent from some authority.
Commands deliver new information to an authority. Events describe how information has been integrated with what was known before.
Events describe information that will be available when processing future commands - the information is durable; commands are transient.
Commands are often generated from a stale, non-authoritative snapshot of some information (a report, or a "view"). Events are reflections of the state of the authority itself.
Events fan out from an authority; we know the sender, but not necessarily the receiver. Commands fan into an authority, we know the receiver, but not necessarily the sender.
It is pretty squishy. We are making copies of a data structure, and at some point our perspective shifts, and even though the source data structure is an event, the copy is a command.
Think subscription: the system is going to copy a data structure from my output stream (an event) to your input stream (a command).
My suggestion: it's all just "messages". Allow yourself to leave it there until you have more laps under your belt.
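To make the distinction concrete, here is a minimal Java sketch (all names are hypothetical): the command is a transient proposal that the authority may reject; the event is the durable record of what it accepted.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.UUID;

    // A command: a transient proposal sent toward the authority.
    record PlaceOrder(String customerId, long amountCents) {}

    // An event: a durable fact published by the authority once it has decided.
    record OrderPlaced(String orderId, String customerId, long amountCents) {}

    class OrderService { // the "authority" for orders
        private final List<OrderPlaced> journal = new ArrayList<>();

        // The authority may reject the command; only accepted commands become events.
        OrderPlaced handle(PlaceOrder cmd) {
            if (cmd.amountCents() <= 0) {
                throw new IllegalArgumentException("rejected: non-positive amount");
            }
            OrderPlaced evt = new OrderPlaced(UUID.randomUUID().toString(),
                    cmd.customerId(), cmd.amountCents());
            journal.add(evt); // events are what is remembered
            return evt;       // and fanned out to whoever subscribes
        }
    }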
I would say
"A command can emit any number of events."
"Commands can be rejected."
"Events have happened."
We are using Lagom for developing our set of microservices. The trick here is that although we are using event sourcing and persisting events into Cassandra, we also have to store the data in a graph DB, since that is what will serve most of the queries in our use case.
As per Lagom's documentation, all insertion into the graph database (or any other database) has to be done in a ReadSideProcessor after the command handler persists the events into Cassandra, following the philosophy of CQRS.
Now here is the problem we are facing. We believe that the ReadSideProcessor is a listener which gets triggered after the events are generated and persisted. What we want is to return a response from the ReadSideProcessor back to the ServiceImpl. For example, when a user is added to the system, the unique ID generated by the graph DB has to be returned as one of the response headers. How can that be achieved in Lagom, since the response is constructed in setCommandHandler and not in the ReadSideProcessor?
Also, we need to make sure that if there is an error on the graph side, the API notifies the client that the request has failed; but exceptions occurring in the ReadSideProcessor are not propagated to either the PersistentEntity or the ServiceImpl class. How can that be achieved as well?
Any help is much appreciated.
The read side processor is not a listener that is attached to the command; it is actually completely disconnected from the persistent entity. It may be running on a different node, at a different time, perhaps even years in the future if you add a new read side processor that first comes up to speed with all the old events in history. If the read side processor were connected synchronously to the command, then it would not be CQRS: there would be no segregation between the command and the query side.
Read side processors essentially poll the database for new events, processing them as they detect them. You can add a new read side processor at any time, and it will get all events from all of history, not just the new ones that are added. This is one of the great things about event sourcing: you don't need to anticipate all your query needs from the start, you can add them as the query need arises.
To further explain why you don't want a connection between the two: what happens if the event persist succeeds, but the update on the graph db fails? Perhaps the graph db has crashed. Does the command have to retry? Does the event have to be deleted? What happens if the node doing the update itself crashes before it has an opportunity to fix the problem? Now your read side is in an inconsistent state from your entities. Connecting them leads to inconsistency in many failure scenarios - for example, when you update your address with a utility company, but your bills still go to the old address, and you contact them, and they say "yes, your new address is updated in our system", but the bills still go to the old address. That's the sort of terrible user experience that you are signing your users up for if you try to connect your read side and write side together. Disconnecting them allows Lagom to ensure consistency between the events you have emitted on the write side and the consumption of them on the read side.
So to address your specific concerns: ID generation should be done on the write side, or, if a subsequent ID is generated on the read side, the read side should also provide a way of mapping write-side IDs to the read-side ID. As for handling errors on the read side: all validation should be done on the write side; the write side should ensure that it never emits an event that is invalid. A sketch of the write-side approach follows.
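As a rough sketch in Lagom's Java persistent entity API (CreateUser, UserCreated, UserState and their members are assumptions, not your actual classes): the command handler generates the ID, persists it inside the event, and replies with it, so the ServiceImpl can return it in a response header without waiting for the read side.

    import com.lightbend.lagom.javadsl.persistence.PersistentEntity;
    import java.util.Optional;
    import java.util.UUID;

    public class UserEntity extends PersistentEntity<UserCommand, UserEvent, UserState> {
        @Override
        public Behavior initialBehavior(Optional<UserState> snapshot) {
            BehaviorBuilder b = newBehaviorBuilder(snapshot.orElse(UserState.EMPTY));

            b.setCommandHandler(CreateUser.class, (cmd, ctx) -> {
                // Generate the unique id here, on the write side...
                String userId = UUID.randomUUID().toString();
                // ...persist it as part of the event...
                return ctx.thenPersist(new UserCreated(userId, cmd.getName()),
                        // ...and reply with it once the event is durably stored.
                        evt -> ctx.reply(userId));
            });

            b.setEventHandler(UserCreated.class,
                    evt -> state().withUser(evt.getUserId()));
            return b.build();
        }
    }

The ReadSideProcessor can then store this write-side ID alongside the graph-generated ID, giving you the mapping between the two.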
Now if the read side processor encounters something that is invalid, then it has two options. One option is that it can fail. In many cases this is a good option, since if something is invalid or inconsistent, it's likely that either you have a bug or some form of corruption. What you don't want to do is continue processing as if everything is fine, since that might make the data corruption or inconsistency even worse. Instead the read side processor stops, your monitoring should then detect the error, and you can go in and work out either what the bug is or how to fix the corruption. Of course, there are downsides to doing this: your read side will start lagging behind the write side while it's unable to process new events. But that's also an advantage of CQRS - the write side is able to continue working, continue enforcing consistency, etc.; the failure is isolated to the read side, and only to updating the read side. Instead of your whole system going down and refusing to accept new requests due to this bug, the failure is isolated to just where the problem is.
The other option is that the read side can store the error somewhere - e.g., store the event in a dead letter table, or raise some sort of trouble ticket - and then continue processing. This way, you can go and fix the event after the fact. This ensures greater availability, but comes at the risk that, if the event it failed to process was important to the processing of subsequent events, you've potentially just got yourself into a bigger mess. A sketch of this option follows.
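The dead-letter option could look something like this plain-Java sketch (not a Lagom API; DeadLetterStore, UserEvent and updateGraphDb are hypothetical):

    // Sketch: wrap the real read-side work so a failure parks the event
    // in a dead-letter table instead of stopping the processor.
    class GraphDbEventHandler {
        private final DeadLetterStore deadLetters;

        GraphDbEventHandler(DeadLetterStore deadLetters) {
            this.deadLetters = deadLetters;
        }

        void process(UserEvent event) {
            try {
                updateGraphDb(event); // the real read-side update
            } catch (Exception e) {
                // Park the failed event for later repair and keep going,
                // trading consistency risk for availability.
                deadLetters.save(event, e);
            }
        }

        private void updateGraphDb(UserEvent event) {
            // graph DB update elided
        }
    }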
Now this does introduce specific constraints on what you can and can't do, but I can't really anticipate those, or how to address them, without specific knowledge of your use case. A common constraint is set validation - for example, how do you ensure that email addresses are unique to a single user in your system? Greg Young (the CQRS guy) wrote this blog post about those types of problems:
http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
We are currently working on a FIX connection, in which data that should only be validated can be marked as such. It has been decided to mark this data with a specific TargetSubID. But that implies a new session.
Let's say we send the messages to the session FIX.4.4:S->T. If we then get a message that should only be validated with TargetSubID V, this implies the session FIX.4.4:S->T/V. If this Session is not configured, we get the error
Unknown session: FIX.4.4:S->T/V
and if we explicitly configure this session next to the other, there is the error
quickfix.Session – [FIX/Session] Disconnecting: Encountered END_OF_STREAM
which, as bhageera says, means that you are logging in with the same credentials:
(...) the counterparty I was connecting to allows only 1 connection
per user/password (i.e. session with those credentials) at a time.
I'm not a FIX expert, but I'm wondering whether the TargetSubID is simply being misused here. If it isn't, I would like to know how to make this work. We are developing the FIX client with camel-quickfix.
It depends a lot on what your system is like and what you want to achieve in the end.
Usually the dimensions to assess are:
maximising the flexibility
minimising the amount of additional logic required to support the testing
minimising the risk of bad things happening on an accidental connection from a test to a prod environment (this may happen, despite what you might think).
Speaking for myself, I would not use tags potentially involved in the session/routing behaviour for testing unless all I need is routing features and my system reliably behaves the way I expect (probably not your case).
Instead I would consider one of these:
pick a tag from the user-defined range (5000-9999)
use one of the symbology tags (say Symbol(55)) corrupted in some reversible way (say "TEST_VOD.L" in tag 55 instead of "VOD.L")
A tag from a custom range would give a lot of flexibility, a corrupted symbology tag would make sure a test order would bounce if sent to prod by accident.
For either solution you may need a tag-based routing and transformation layer. Both can be done in a couple of hours in generic form if you are using something Java-based (I'd look towards javax.scripting / Nashorn); see the sketch below.
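The reversible corruption could be as simple as this QuickFIX/J-flavored sketch (the TEST_ prefix is an assumed convention, not a standard):

    import quickfix.FieldNotFound;
    import quickfix.Message;
    import quickfix.field.Symbol;

    // Sketch: mark/unmark test traffic by reversibly corrupting Symbol(55).
    final class TestMarker {
        private static final String PREFIX = "TEST_"; // assumed convention

        static void mark(Message msg) throws FieldNotFound {
            msg.setField(new Symbol(PREFIX + msg.getString(Symbol.FIELD)));
        }

        static void unmark(Message msg) throws FieldNotFound {
            String s = msg.getString(Symbol.FIELD);
            if (s.startsWith(PREFIX)) {
                msg.setField(new Symbol(s.substring(PREFIX.length())));
            }
        }
    }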
It's up to the counterparties - sometimes Sender/TargetSubID are considered part of the unique connection, sometimes they distinguish messages on one connection.
Does your library have a configuration option to exclude the sub IDs from the connection lookups? E.g. in QuickFIX you can set the SessionQualifier.
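As a sketch of what that looks like in a QuickFIX settings file (the CompIDs are from the question; note this only distinguishes the two sessions inside your own engine - the counterparty may still reject a second logon with the same credentials):

    [SESSION]
    BeginString=FIX.4.4
    SenderCompID=S
    TargetCompID=T
    SessionQualifier=prod

    [SESSION]
    BeginString=FIX.4.4
    SenderCompID=S
    TargetCompID=T
    SessionQualifier=validation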
I'm hoping someone can suggest a good technique for sorting Gmail threads by date without needing to get details on potentially thousands of threads.
Right now I use threads.list to get a list of threads, and I'm using whatever order they're returned in. That's mostly correct in that it returns threads in reverse chronological order. Except that chronology is apparently determined by the first message in a thread rather than the thread's most recent message.
That's fine for getting new messages at the top of the list but it's not good if someone has a new reply to a thread that started a few days ago. I'd like to put that at the top of the list, since it's a new message. But threads.list leaves it sorted based on the first message.
I thought the answer might be to sort threads based on historyId. That does sort by the most recent message. But it's also affected by any other thread change. If the user updates the labels on a message (by starring it, for example), historyId changes. Then the message sorts to the top even though it's not new.
I could use threads.get to get details of the thread, and do more intelligent sorting based on that. But users might have thousands of threads and I don't want to have to make this call for every one of them.
Does anyone have a better approach? Something I've missed?
I'm not a developer and have never used the API before, but I just read the API documentation and it doesn't seem to have the functionality you want.
Anyway, this is what I understood in your question:
You want to organize threads by the latest message in each one.
I thought you could use a combination of users.messages and threads.list. In users.messages you'll have the ThreadID:
threadId string The ID of the thread the message belongs to.
The method would be to use the date from users.messages to order the latest messages from newest to oldest, then obtain their parent threads by threadId and present the threads ordered by their latest received message.
With this method you'll avoid a threads.get call for every thread, saving resources and time. A rough sketch follows.
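A rough Java sketch of this, assuming `service` is an authorized Gmail API client and that messages.list returns messages newest-first (the same ordering caveat applies as for threads.list):

    import com.google.api.services.gmail.Gmail;
    import com.google.api.services.gmail.model.ListMessagesResponse;
    import com.google.api.services.gmail.model.Message;
    import java.io.IOException;
    import java.util.LinkedHashSet;

    class ThreadOrdering {
        // The first time a threadId appears in the newest-first message list,
        // that is the thread's most recent message, so the insertion order of
        // this set is the desired thread order.
        static LinkedHashSet<String> orderedThreadIds(Gmail service) throws IOException {
            LinkedHashSet<String> ordered = new LinkedHashSet<>();
            String pageToken = null;
            do {
                ListMessagesResponse resp = service.users().messages().list("me")
                        .setMaxResults(500L)
                        .setPageToken(pageToken)
                        .execute();
                if (resp.getMessages() != null) {
                    for (Message m : resp.getMessages()) {
                        ordered.add(m.getThreadId());
                    }
                }
                pageToken = resp.getNextPageToken();
            } while (pageToken != null);
            return ordered;
        }
    }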
I don't know how new messages are affected by labels or starring; you'll have to find that out.
I apologize in advance if my answer is incorrect or misleading.
I'm trying to understand the variety of message types and data fields in FIX protocol.
I understand most of it, but I'm still not sure about Quotes and HandlInst.
When a dealer or a broker wants to trade in the market, he has a list of all available products (e.g. USD/EUR, USD/JPY, ...). Each product has a sell price and a buy price, which are updated at a high rate. Which message type generates these values? A Quote message type (quote request and quote response)?
A broker has the option to decide, for each of his dealers, whether he automatically responds as the counterparty to the dealer's orders or the orders go out to the market to be traded. Which field in the order message types indicates that mark? I was thinking about HandlInst <21>, but I'm not quite sure...
Thanks for your help
These are vendor-specific questions. FIX just gives you a pile of messages and fields you can use, and some recommendations for how to use them. Some counterparties even follow those recommendations, but nearly all of them add their own weird customizations or use certain fields in weird ways.
If you are connecting to an external counterparty, you need to read their docs for their specific FIX interface. That will tell you which fields they use, how they use them, and what they expect from you.
So, get your counterparty's docs and read them.
OK Kitsune, here are my answers for you (you should pay me for this!)
Yes: Quote, and QuoteCancel (usually).
I think you are talking about an internal broker decision that has nothing to do with FIX. HandlInst can be used by a client sending their order to the broker to specify manual or automatic execution, but I think that applies in specific limit order cases only, not the usual fill-or-kill stuff. If the order was $100mio then maybe a client would specify manual execution. Again, as in your other post, the ExecutionReport can specify manual or auto execution as well. Need to look at that...
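For reference, a sketch of setting HandlInst on a QuickFIX/J FIX 4.4 NewOrderSingle (the ClOrdID is a placeholder; per the FIX 4.4 spec, '1' means automated execution with no broker intervention and '3' means manual order, best execution):

    import quickfix.field.ClOrdID;
    import quickfix.field.HandlInst;
    import quickfix.field.OrdType;
    import quickfix.field.Side;
    import quickfix.field.TransactTime;
    import quickfix.fix44.NewOrderSingle;

    public class HandlInstExample {
        public static void main(String[] args) {
            // FIX 4.4 NewOrderSingle with its required fields.
            NewOrderSingle order = new NewOrderSingle(
                    new ClOrdID("ORD-1"),
                    new Side(Side.BUY),
                    new TransactTime(),
                    new OrdType(OrdType.LIMIT));

            // '1' = automated execution, no broker intervention.
            order.set(new HandlInst('1'));
            System.out.println(order);
        }
    }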