FIX order tracking - quickfix

With respect to FIX 4.2 or greater:
Q1.a. How are incoming and outgoing sequence #’s correlated/linked? Is there a buyer-specific FIX tag a buyer can embed/use explicitly for tracking upon submitting a buy order that is also included in subsequent incoming status message sequences from the broker?
Q1.b. If not, then how does a buyer individually manage/track several IOC buy orders that are submitted in quick succession or concurrently, for securities which may or may not be identical, at different price levels, where units or shares are “filled” at varying rates?

Q1.a. How are incoming and outgoing sequence #’s correlated/linked?
They are not linked (i.e. they are independent). Any FIX application/engine (such as the QuickFIX family) maintains two sequence numbers per session, one for incoming and one for outgoing messages. See also this answer on Stack Overflow, which says much the same.
When using an engine from the QuickFIX family (QuickFIX, QuickFIX/J, QuickFIX/N), these sequence numbers are managed for you, and apart from some configuration agreed with your counterparty you should not need to manage them yourself.
Q1.a. Is there a buyer-specific FIX tag a buyer can embed/use explicitly for tracking upon submitting a buy order that is also included in subsequent incoming status message sequences from the broker?
Such a tag is already present in e.g. the FIX Order Single message (D) as ClOrdID (tag 11):
Unique identifier for Order as assigned by the buy-side (institution, broker, intermediary etc.) [...]. Uniqueness must be guaranteed within a single trading day. Firms, particularly those which electronically submit multi-day orders, trade globally or throughout market close periods, should ensure uniqueness across days, for example by embedding a date within the ClOrdID field.
This field is mandatory when creating a new order using FIX Order Single, and is used to refer to the order in subsequent messaging (e.g. Execution Reports or Order Status messages).
Note that the ClOrdID changes when an order is changed using an Order Cancel/Replace Request <G>, i.e. you assign a new ClOrdID to the order when changing or canceling it.
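To make that concrete, here is a minimal QuickFIX/J-style sketch (the symbol, price, and ClOrdID values are made up, and the exact TransactTime constructor varies between QuickFIX/J versions): the buyer assigns the ClOrdID on the New Order Single, and the broker echoes the same value back on every Execution Report, which is what lets you track each of several concurrent IOC orders individually.

```java
import java.time.LocalDateTime;

import quickfix.*;
import quickfix.field.*;
import quickfix.fix42.NewOrderSingle;

public class OrderTracking {

    // Send an IOC limit buy order tagged with our own ClOrdID (tag 11).
    public static void sendOrder(SessionID sessionID) throws SessionNotFound {
        NewOrderSingle order = new NewOrderSingle(
                new ClOrdID("20240102-IOC-0001"),   // buyer-assigned, unique per day
                new HandlInst('1'),                 // automated execution, private
                new Symbol("IBM"),
                new Side(Side.BUY),
                new TransactTime(LocalDateTime.now()),
                new OrdType(OrdType.LIMIT));
        order.set(new OrderQty(100));
        order.set(new Price(135.25));
        order.set(new TimeInForce(TimeInForce.IMMEDIATE_OR_CANCEL));
        Session.sendToTarget(order, sessionID);
    }

    // The broker's Execution Reports echo the same ClOrdID, so partial
    // fills can be matched back to the order they belong to.
    public static void onExecutionReport(quickfix.fix42.ExecutionReport report)
            throws FieldNotFound {
        String clOrdID = report.getString(ClOrdID.FIELD); // tag 11, as we sent it
        char execType = report.getChar(ExecType.FIELD);   // new / partial fill / fill
        double cumQty = report.getDouble(CumQty.FIELD);   // shares filled so far
        System.out.println(clOrdID + ": execType=" + execType + ", cumQty=" + cumQty);
    }
}
```

With a distinct ClOrdID per order, several concurrent IOC orders on the same symbol at different price levels remain individually trackable as their fills arrive.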

Related

How to communicate between order placing system and trade matching system

Order Placing System
A user places an order; the corresponding amount is put on hold and an order is created. This order is then pushed to a queue to be consumed by the trade matching system. The user gets back a reference order id for the placed order in the response to the API call.
Trade Matching System
The system feeds on data from the queue produced by the order placing system and looks for a possible match; if a match can be executed, it executes it and pushes the result to another queue.
User Notification System
The system fetches data from the executed queue and broadcasts it to the user it belongs to. The user can also fetch the status of the order using the reference id that was shared in the first API call.
These two systems currently communicate indirectly via a queue. The new requirement is that in the order placing system, when a user places an order, we need to return the execution status along with the order id (i.e. whether it got executed or not, and if so, the rate and fee charged, etc.).
What should the mode of communication between the order placing and trade matching systems be, so that execution details can be returned in the first API call itself?
Challenges
The matching system is single threaded, so we cannot merge it with the order engine.
Polling and waiting for execution results from the execution queue will probably make our order placing API slow.
Right now our order placing system and matching system are separate.
Just looking for possible solution and opinion. Please let me know if something is unclear.
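For what it's worth, one common pattern here (sketched below under assumed names, not actual project code) is request-reply over the existing queues, assuming the order placing service also subscribes to the executed queue (or a dedicated reply queue): the API registers a future keyed by the order id before publishing, and the executed-queue consumer completes that future, so the API blocks only up to a timeout instead of polling.

```java
import java.util.Map;
import java.util.concurrent.*;

// Illustrative request-reply helper: the order placing API waits on a
// future keyed by order id, and the executed-queue consumer completes it.
public class ExecutionAwaiter {

    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called by the order placing API right after publishing the order.
    public String awaitExecution(String orderId, long timeoutMs) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(orderId, future);
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "PENDING"; // no match yet; the client can poll by order id later
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pending.remove(orderId);
        }
    }

    // Called by the consumer of the executed-orders queue.
    public void onExecution(String orderId, String executionDetails) {
        CompletableFuture<String> future = pending.get(orderId);
        if (future != null) {
            future.complete(executionDetails); // wakes the waiting API call
        }
    }
}
```

This keeps the matching system single threaded and the two systems separate; the trade-off is that the API's worst-case latency is the timeout, after which the caller falls back to the order id for later status checks.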

Powerful and cheap array algorithms

This is how text messages are normally stored in the Firebase Realtime Database: all of them under a single parent node.
I am not fond of the idea that every time someone joins a group chat, they would need to download the entire history of e.g. 20,000 text messages. Naturally, users wouldn't swipe all the way up to the very first message. However, in the Firebase Realtime Database, storing all messages under a given parent node causes all of them to be downloaded as soon as a user queries that node (to join the group chat).
One possible efficiency solution:
Adding a second parent node that stores older text messages, e.g. the latest 500 text messages are kept under the main messages parent node and the remaining 19,500 old text messages are kept under a different parent node. However, the parent node for old text messages would also need to be updated as newer messages age out, and we would then need to download all 19,500 old text messages as a consequence.
Perhaps the ideal case is to create up to N parent nodes that store packets of 300 text messages each. However, what consequences would there be from excessively creating parent nodes?
What efficiency solutions are recommended with a problem like this? Is there some technique I am forgetting or unaware of?
Just sort the list by date descending and limit the query to the last N messages, or save the "inverse" date (e.g. the negated timestamp) and sort on that. You can read more about it here.
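As an illustration, a hedged sketch with the Firebase Realtime Database Android SDK (the path and the "timestamp" child are assumptions about how your messages are structured): ordering by timestamp and limiting to the last 50 means only those messages are downloaded when a user opens the chat.

```java
import com.google.firebase.database.*;

public class RecentMessagesLoader {

    // Assumed layout: /groupChats/<chatId>/messages/<pushId>/{text, timestamp}
    public static void loadRecent(String chatId) {
        DatabaseReference messagesRef = FirebaseDatabase.getInstance()
                .getReference("groupChats/" + chatId + "/messages"); // hypothetical path

        // Only the newest 50 messages are transferred, not the whole history.
        Query recent = messagesRef.orderByChild("timestamp").limitToLast(50);

        recent.addListenerForSingleValueEvent(new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot snapshot) {
                for (DataSnapshot message : snapshot.getChildren()) {
                    // Render each message; children arrive in ascending timestamp order.
                }
            }

            @Override
            public void onCancelled(DatabaseError error) {
                // Handle the read being cancelled (e.g. permission denied).
            }
        });
    }
}
```

Older pages can then be fetched on demand with endAt(...) anchored at the oldest loaded timestamp, so "swiping up" only ever pulls one page at a time.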

Can I use Time as globally unique event version?

I find time to be the best value for an event version.
I can merge perfectly independent events of different event sources on different servers whenever needed, without worrying about read-side event order synchronization. I know which event (from server 1) happened before the other (from server 2) without needing a global sequential event id generator, which would make all read sides depend on it.
As long as time serves as a globally ever-increasing sequential event version, different teams in a company can act as distributed event sources or event readers, and everyone can always rely on the contract.
The world's simplest notification from a write side to subscribed read sides, followed by a query pulling the recent changes from the underlying write side, can simplify everything.
Are there any side effects I'm not aware of?
Time is indeed increasing and you get a deterministic number; however, event versioning does not serve only the purpose of preventing conflicts. We always say that when we commit a new event to the event store, we send the new event version there as well, and it must match the expected version on the event store side, which must be the previous version plus exactly one. Whether there are a thousand or three million ticks between two events, I do not really care; that does not give me the information I need. But knowing whether I have missed an event along the way is critical. So I would not use anything other than an incremental counter, with events versioned per aggregate/stream.
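To make the "previous version plus exactly one" rule concrete, here is a minimal in-memory sketch (the names are illustrative, not a real event-store API): a commit carrying the wrong expected version is rejected immediately, which is exactly the gap detection a timestamp cannot give you.

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

// Events are versioned per stream with an incrementing counter; a commit
// is rejected unless the caller's expected version matches the last
// committed one, so a missed or conflicting event is detected at once.
public class InMemoryEventStore {

    private final Map<String, List<Object>> streams = new ConcurrentHashMap<>();

    public synchronized void append(String streamId, Object event, long expectedVersion) {
        List<Object> stream = streams.computeIfAbsent(streamId, id -> new ArrayList<>());
        long currentVersion = stream.size(); // version = number of committed events
        if (currentVersion != expectedVersion) {
            throw new IllegalStateException(
                    "Expected version " + expectedVersion + " but stream is at " + currentVersion);
        }
        stream.add(event); // this event becomes version expectedVersion + 1
    }
}
```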

Why is my CallFire phone number not available after I've placed an order via API

I have a Scala client that talks to the CallFire API. I can't find anything in the documentation about having a phone number be immediately available (accepting phone calls) after placing an order via the API. Here is the specific line I use: https://github.com/oGLOWo/callfire-scala-client/blob/master/src/main/scala/com/oglowo/callfire/Client.scala#L166
I need these numbers to be available when my customers purchase them. Are there any parameters I don't know about, or is there something I'm doing wrong that is causing the numbers not to pick up for several minutes?
Number purchases can take several minutes to fulfil while the order is processed by the upstream number provider, and the delay can vary with the region and number type. As such, this is necessarily an asynchronous process.
My suggestion would be that after you create the number order, whenever you need to know the status of the number you purchased, you invoke the GetNumber operation to get status information for that number.
The most relevant field for your purposes would be the "Status" field, which indicates where in the number fulfilment process that number is. Once the status has transitioned to "Active", your number should be fully available.
Additionally, you can look at the "CallFeature" and "TextFeature" fields, in the NumberConfiguration section of the Number resource, to see whether the number has confirmed call or text service yet, respectively.
Alternatively, you can also invoke the GetNumberOrder operation to get the status of your order. This will give you information on the status of the number order itself, but in my opinion is less useful for your purposes than querying the number status directly.
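For example, a hedged polling sketch (the getNumberStatus helper is hypothetical; it stands in for however your client invokes GetNumber and reads its "Status" field):

```java
import java.util.concurrent.TimeUnit;

public class NumberActivationPoller {

    // Poll the GetNumber operation until the number's Status is "Active",
    // backing off a few seconds between attempts.
    public static boolean waitUntilActive(String number, int maxAttempts)
            throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            String status = getNumberStatus(number); // hypothetical GetNumber wrapper
            if ("Active".equals(status)) {
                return true; // the number should now be fully available
            }
            TimeUnit.SECONDS.sleep(5);
        }
        return false; // still pending; consider surfacing this to the customer
    }

    private static String getNumberStatus(String number) {
        // Placeholder: call the CallFire GetNumber operation here and
        // return the value of its "Status" field.
        throw new UnsupportedOperationException("wire up your CallFire client");
    }
}
```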
It is also worth mentioning that there are cases where the number is technically being serviced, but CallFire's number inventory hasn't yet been updated to indicate this. This can be pushed along by creating inbound traffic to the number on each of the features. That is, you might "activate" the numbers you purchase more rapidly by sending a call or text to them. This is due to the slight delay between the number being configured upstream and CallFire's systems being notified of that fact. By sending traffic to the number, you give CallFire's systems quicker feedback that the number is enabled. This can save you up to a couple of minutes, if time is of the essence.
Your question has prompted me to create a feature request for CallFire internally, to add an event type to CreateSubscription for when number orders transition between statuses. This way, you could avoid having to poll for number/order statuses repeatedly, and instead we would notify your server by HTTP POST when the number order transitions to finished.

Last Updated Date: Antipattern?

I keep seeing questions floating through that make reference to a column in a database table named something like DateLastUpdated. I don't get it.
The only companion field I've ever seen is LastUpdateUserId or such. There's never an indicator about why the update took place; or even what the update was.
On top of that, this field is sometimes written from within a trigger, where even less context is available.
It certainly doesn't even come close to being an audit trail, so that can't be the justification. And if there is an audit trail somewhere in a log or whatever, this field would be redundant.
What am I missing? Why is this pattern so popular?
Such a field can be used to detect whether there are conflicting edits made by different processes. When you retrieve a record from the database, you get the previous DateLastUpdated field. After making changes to other fields, you submit the record back to the database layer. The database layer checks that the DateLastUpdated you submit matches the one still in the database. If it matches, then the update is performed (and DateLastUpdated is updated to the current time). However, if it does not match, then some other process has changed the record in the meantime and the current update can be aborted.
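As a hedged JDBC sketch of that check (the table and column names are illustrative, not from the question): the UPDATE matches only if the timestamp is still the one we read, and zero affected rows means another process changed the record in the meantime.

```java
import java.sql.*;

public class OptimisticUpdate {

    // Optimistic concurrency via DateLastUpdated: abort if the row was
    // modified after we read it.
    public static void updateName(Connection conn, long id, String newName,
                                  Timestamp dateLastUpdatedWhenRead) throws SQLException {
        String sql = "UPDATE customers SET name = ?, date_last_updated = CURRENT_TIMESTAMP "
                   + "WHERE id = ? AND date_last_updated = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, newName);
            ps.setLong(2, id);
            ps.setTimestamp(3, dateLastUpdatedWhenRead);
            if (ps.executeUpdate() == 0) {
                throw new SQLException("Conflicting edit: record changed since it was read");
            }
        }
    }
}
```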
It depends on the exact circumstances, but a timestamp like that can be very useful for autogenerated data - you can figure out whether something needs to be recalculated when a dependency has changed later on (this is how build systems calculate which files need to be recompiled).
Also, many websites will have data marking "Last changed" on a page, particularly news sites that may edit content. The exact reason isn't necessary (and there likely exist backups in case an audit trail is really necessary), but this data needs to be visible to the end user.
These sorts of things are typically used for business applications where user action is required to initiate the update. Typically, there will be some kind of business app (e.g. a CRM desktop application), and for most updates there tends to be only one way of making the update.
If you're looking at address data, that was done through the "Maintain Address" screen, etc.
Such database auditing is there to augment business-level auditing, not to replace it. Call centres will sometimes (or always in the case of financial services providers in Australia, as one example) record phone calls. That's part of the audit trail too but doesn't tend to be part of the IT solution as far as the desktop application (and related infrastructure) goes, although that is by no means a hard and fast rule.
Call centre staff will also typically have some sort of "Notes" or "Log" functionality where they can type freeform text as to why the customer called and what action was taken so the next operator can pick up where they left off when the customer rings back.
Triggers will often be used to record exactly what was changed (e.g. writing the old record to an audit table). The purpose of all this is that with all the information (the notes, recorded call, database audit trail and logs) the previous state of the data can be reconstructed, as can the resulting action. This may be to find/resolve bugs in the system, or simply as part of a conflict resolution process with the customer.
It is certainly popular - Rails, for example, has a shorthand for it, as well as for a creation timestamp (:timestamps).
At the application level it's very useful, as the same pattern is very common in views - look at the questions here for example (answered 56 secs ago, etc).
It can also be used retrospectively in reporting to generate stats (e.g. what is the growth curve of the number of records in the DB).
There are a couple of scenarios. Let's say you have an address table for your customers:
You have your CRM app; a customer calls saying his address changed a month ago, and with the LastUpdate column you can see that this row hasn't been touched in 4 months.
Usually you use triggers to populate a history table so that you can see all the other history; if the creation date and the updated date are the same, there is no point hitting the history table since you won't find anything.
You calculate indexes (stock market); you can easily see that an index was recalculated just by looking at this column.
There are 2 DB servers; by comparing the date column you can find out whether all the changes have been replicated, etc.
This is also very useful if you have to send delta feeds out to clients, that is, feeds containing only the records that have been changed or inserted since the date of the last feed.
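A hedged JDBC sketch of such a delta feed (again with illustrative table and column names): only rows touched since the previous feed's cut-off are selected.

```java
import java.sql.*;
import java.util.*;

public class DeltaFeed {

    // Select only the rows changed or inserted after the last feed's date;
    // assumes last_updated is maintained on every insert/update.
    public static List<String> changedSince(Connection conn, Timestamp lastFeedDate)
            throws SQLException {
        List<String> changed = new ArrayList<>();
        String sql = "SELECT id, name, last_updated FROM customers WHERE last_updated > ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setTimestamp(1, lastFeedDate);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Each row here changed after the previous feed was generated.
                    changed.add(rs.getLong("id") + "," + rs.getString("name"));
                }
            }
        }
        return changed;
    }
}
```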