Why is my CallFire phone number not available after I've placed an order via API - scala

I have a Scala client that talks to the CallFire API. I can't find anything in the documentation about having a phone number be immediately available (accept phone calls) after placing an order from the API. Here is the specific line I use: https://github.com/oGLOWo/callfire-scala-client/blob/master/src/main/scala/com/oglowo/callfire/Client.scala#L166
I need these numbers to be available when my customers purchase them. Are there any parameters that I don't know about or something that I'm doing wrong that is causing the numbers to not pick up for several minutes?

Number purchases can take several minutes to fulfil, as the order is processed by the upstream number provider; the delay varies by region and number type. As such, this is necessarily an asynchronous process.
My suggestion would be that after you create the number order, whenever you need to know the status of the number you purchased, you invoke the GetNumber operation to get status information for that number.
The most relevant field for your purposes would be the "Status" field, which indicates where that number is in the fulfilment process. Once the status has transitioned to "Active", your number should be fully available.
Additionally, you can look at the "CallFeature" and "TextFeature" fields, in the NumberConfiguration section of the Number resource, to see whether the number has confirmed call or text service yet, respectively.
Alternatively, you can also invoke the GetNumberOrder operation to get the status of your order. This will give you information on the status of the number order itself, but in my opinion is less useful for your purposes than querying the number status directly.
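The polling approach above can be sketched in Scala as follows. This is a minimal illustration, not the real client API: `fetchStatus` stands in for whatever GetNumber call your client exposes, and is injected as a function so the loop itself stays generic.

```scala
import scala.annotation.tailrec

object NumberPolling {
  // Poll until the number's status becomes "Active" or we run out of
  // attempts. Returns true if the number went active, false on timeout.
  @tailrec
  def awaitActive(fetchStatus: () => String,
                  attemptsLeft: Int,
                  delayMs: Long = 2000): Boolean =
    if (attemptsLeft <= 0) false
    else if (fetchStatus() == "Active") true
    else {
      Thread.sleep(delayMs)
      awaitActive(fetchStatus, attemptsLeft - 1, delayMs)
    }
}
```

In a real client you would plug a GetNumber call (and its error handling) into `fetchStatus`, and pick a delay/attempt budget that matches the few-minute fulfilment window described above.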
It is also worth mentioning that there are cases where the number is technically in service, but CallFire's number inventory hasn't yet been updated to reflect this. This is due to the slight delay between the number being configured upstream and CallFire's systems being notified of that fact. You can push this along by creating inbound traffic to the number on each of the features: sending a call or text to a newly purchased number gives CallFire's systems faster feedback that the number is enabled. This can save you up to a couple of minutes, if time is of the essence.
Your question has prompted me to create a feature request for CallFire internally, to add an event type to CreateSubscription for when number orders transition between statuses. This way, you could avoid having to poll for number/order statuses repeatedly, and instead we would notify your server by HTTP POST when the number order transitions to finished.

Related

How to communicate between order placing system and trade matching system

Order Placing System
A user places an order, the corresponding amount is put on hold, and an order is created. This order is then pushed to a queue to be consumed by the trade matching system. The user gets back a reference order id for the placed order in response to the API call.
Trade Matching System
The system consumes data from the queue generated by the order placing system, looks for possible matches, executes them when execution is possible, and pushes the results to another queue.
User Notification System
The system fetches data from the executed-orders queue and broadcasts it to the user it belongs to. The user can also fetch the status of the order using the reference id returned by the first API call.
These two systems currently communicate indirectly via a queue. The new requirement is that when a user places an order, the order placing system must return, along with the order id, the execution status (i.e. whether it got executed or not, and if so, the rate and fee charged, etc.).
What should the mode of communication between the order placing and trade matching systems be, to make it possible to return execution details in the first API call itself?
Challenges
The matching system is single threaded, so we cannot merge it with the order engine.
Polling and waiting for an execution on the execution queue would probably make our order placing API slow.
Right now our order placing system and matching system are separate.
Just looking for possible solution and opinion. Please let me know if something is unclear.
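One commonly used pattern for this kind of request/reply over a queue can be sketched as follows (all names here — `Execution`, `ExecutionCorrelator` — are illustrative, not part of any real system): the order placing API registers a per-order Promise before pushing the order to the queue, the consumer of the executed-orders queue completes it, and the API awaits it with a deadline, falling back to a PENDING status so the call never blocks indefinitely.

```scala
import scala.collection.concurrent.TrieMap
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._
import scala.util.Try

final case class Execution(orderId: String, status: String)

object ExecutionCorrelator {
  private val pending = TrieMap.empty[String, Promise[Execution]]

  // Called by the order placing API *before* pushing the order to the
  // queue, so a fast execution report cannot be missed.
  def register(orderId: String): Unit = {
    pending.putIfAbsent(orderId, Promise[Execution]())
    ()
  }

  // Called by the consumer reading the executed-orders queue.
  def onExecutionReport(exec: Execution): Unit =
    pending.get(exec.orderId).foreach(_.trySuccess(exec))

  // Called by the API after queuing the order: wait briefly for the
  // execution report, otherwise answer PENDING and let the user poll later.
  def awaitExecution(orderId: String, deadline: FiniteDuration): Execution = {
    val p = pending.getOrElseUpdate(orderId, Promise[Execution]())
    val result = Try(Await.result(p.future, deadline))
      .getOrElse(Execution(orderId, "PENDING"))
    pending.remove(orderId)
    result
  }
}
```

The deadline keeps the API's latency bounded even when the matching system is busy, which addresses the "polling will make our API slow" concern at the cost of sometimes returning PENDING.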

Send an Alert to user based on changes in the database row

There are 10,000 users. Each can define up to 500 conditions for an enterprise supply chain inventory.
An example of a condition could be
Group1
Item in InventoryX > 5000 AND colourItem == Red
AND Group2
Item in InventoryY > 4000 AND colourItem == Green
Whenever the state of the database (a single row in the InventoryX, InventoryY, and colourItem columns) meets the condition above, the user who created the alert should be notified.
The first solution that comes to mind is to keep polling the database continuously at a given time interval (say 1 minute), but the problem is that every minute there would be 10,000 × 500 condition checks.
This is difficult to scale.
We also need to keep in mind that the users are given a simple front-end to create conditions, and they can update these conditions on a whim. No hard coding can work.
What would be a better architecture/project to be used to achieve the same?
Database = PostgreSQL.
https://www.postgresql.org/docs/current/sql-notify.html
This appears difficult to use here, since there is no easy way to manage so many conditions with it.
You only have three options:
1. Poll the database for changes (which, as you say, gets expensive).
2. Check all the rules as the changes are made.
3. Make a note of changed data as changes are made and check that changed subset in batches.
Whether you prefer #2 or #3 depends on how many rules there are, how long it takes to check them for each changed row and whether you can usefully summarise changes or merge alerts.
Both #2 and #3 would use one or more triggers. Option #2 would run the rules and add entries to an "alerts" queueing table (or NOTIFY a listening process to send an alert, or set a flag in memcached/redis etc).
Option #3 would just note either the IDs of changed rows, or perhaps the details of the change, and you would have another process read the changes and generate alerts. This gives you the opportunity to notice that the same change was made twice and send only one alert, if that is useful to you.
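A minimal sketch of option #3's batch step, under assumed names and types (the trigger that records the changed IDs is assumed, not shown): drain the set of changed row IDs, de-duplicate it, and evaluate the user conditions against only that changed subset.

```scala
final case class Row(id: Int, inventory: String, quantity: Int, colour: String)

object ChangeBatcher {
  type Condition = Row => Boolean

  // Evaluate the rules only against the changed, de-duplicated subset of
  // rows, and return the (userId, rowId) pairs that should raise an alert.
  def alertsFor(changedIds: Seq[Int],
                lookup: Int => Row,
                rules: Map[String, Condition]): Seq[(String, Int)] =
    for {
      id         <- changedIds.distinct
      row         = lookup(id)
      (user, ok) <- rules.toSeq
      if ok(row)
    } yield (user, id)
}
```

The key point is that the work per batch is proportional to the number of *changed* rows, not to 10,000 × 500; the `distinct` call is where a duplicate change collapses into a single alert.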

FIX order tracking

With respect to FIX 4.2 or greater:
Q1.a. How are incoming and outgoing sequence #’s correlated/linked? Is there a buyer specific FIX tag a buyer can embed/use explicitly for tracking upon submitting a buy order that is also included in subsequent incoming status message sequences from the broker?
Q1.b. If not, then how does a buyer manage/track individually several IOC buy orders that are submitted in quick succession or concurrently of securities which may or may not be identical, at different price levels, where units or shares are “filled” at varying rates?
Q1.a. How are incoming and outgoing sequence #’s correlated/linked?
They are not linked (i.e. they are independent). Any FIX application/engine (such as the QuickFIX family) maintains two sequence numbers per session, one for incoming and one for outgoing. See also this answer on Stack Overflow, which pretty much tells you the same.
When using an engine like any of the QuickFIX family (QuickFIX, QuickFIX/J, QuickFIX/N), these will be managed for you, and apart from some configuration vis-à-vis your counterparty you should not need to manage them yourself.
Q1.a. Is there a buyer specific FIX tag a buyer can embed/use explicitly for tracking upon submitting a buy order that is also included in subsequent incoming status message sequences from the broker?
Such a tag is already present in e.g. the FIX Order Single message (D) - ClOrdID:
Unique identifier for Order as assigned by the buy-side (institution, broker, intermediary etc.) [...]. Uniqueness must be guaranteed within a single trading day. Firms, particularly those which electronically submit multi-day orders, trade globally or throughout market close periods, should ensure uniqueness across days, for example by embedding a date within the ClOrdID field.
This field is mandatory when creating a new order using FIX Order Single, and is used to refer to the order in subsequent messaging (e.g. Execution Report, or Status messages).
Note that the ClOrdID changes when an order is changed using an Order Cancel/Replace Request <G>, i.e. you assign a new ClOrdID to the order when changing or canceling it.
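A minimal Scala sketch of tracking several in-flight orders by ClOrdID, including the re-keying a Cancel/Replace implies (the `Order` type and method names here are illustrative, not the QuickFIX API; in real FIX the old ClOrdID arrives back in OrigClOrdID, tag 41):

```scala
final case class Order(clOrdId: String, symbol: String, filledQty: Int)

final class OrderTracker {
  private var orders = Map.empty[String, Order]

  // New Order Single sent: remember the order under its ClOrdID (tag 11).
  def newOrder(clOrdId: String, symbol: String): Unit =
    orders += clOrdId -> Order(clOrdId, symbol, 0)

  // Execution Report with a fill: accumulate quantity on the right order.
  def onFill(clOrdId: String, qty: Int): Unit =
    orders.get(clOrdId).foreach { o =>
      orders += clOrdId -> o.copy(filledQty = o.filledQty + qty)
    }

  // Cancel/Replace accepted: the order is now identified by the new ClOrdID.
  def onReplace(origClOrdId: String, newClOrdId: String): Unit =
    orders.get(origClOrdId).foreach { o =>
      orders -= origClOrdId
      orders += newClOrdId -> o.copy(clOrdId = newClOrdId)
    }

  def get(clOrdId: String): Option[Order] = orders.get(clOrdId)
}
```

This is exactly why several concurrent IOC orders are not a problem: every Execution Report carries the ClOrdID of the order it refers to, so fills at varying rates land on the right entry regardless of session sequence numbers.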

Can I use Time as globally unique event version?

I find time to be the best value for an event version.
I can merge perfectly independent events from different event sources on different servers whenever needed, without worrying about event ordering on the read side. I know which event (from server 1) happened before the other (from server 2) without needing a global sequential event id generator, which would make all read sides depend on it.
As long as time is a globally ever-increasing event version, different teams in companies can act as distributed event sources or event readers, and everyone can always rely on the contract.
The world's simplest notification from a write side to subscribed read sides, followed by a query pulling the recent changes from the underlying write side, can simplify everything.
Are there any side effects I'm not aware of ?
Time is indeed increasing and gives you a deterministic number; however, event versioning does not only serve the purpose of preventing conflicts. We always say that when we commit a new event to the event store, we send the new event version along with it, and it must match the expected version on the event store side, which must be the previous version plus exactly one. Whether there are a thousand or three million ticks between two events, I do not really care; that does not give me the information I need. But knowing whether I have missed an event along the way is critical. So I would not use anything other than an incremental counter, with events versioned per aggregate/stream.
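The expected-version check described here can be sketched as follows (an in-memory stand-in for an event store, versioned per stream; names are illustrative):

```scala
final case class Event(version: Long, payload: String)

final class Stream {
  private var events = Vector.empty[Event]

  def currentVersion: Long = events.size.toLong

  // Append succeeds only when the caller's expected version matches the
  // stream's current version; each new event gets version current + 1.
  // Returns Right(newVersion) on success, Left(actualVersion) on conflict.
  def append(expectedVersion: Long, payload: String): Either[Long, Long] =
    if (expectedVersion != currentVersion) Left(currentVersion)
    else {
      events = events :+ Event(currentVersion + 1, payload)
      Right(currentVersion)
    }
}
```

With a timestamp instead of a counter, the `Left` branch could not distinguish "someone wrote before me" from "I simply missed an event" — which is the information the answer says matters.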

Last Updated Date: Antipattern?

I keep seeing questions floating through that make reference to a column in a database table named something like DateLastUpdated. I don't get it.
The only companion field I've ever seen is LastUpdateUserId or such. There's never an indicator about why the update took place; or even what the update was.
On top of that, this field is sometimes written from within a trigger, where even less context is available.
It certainly doesn't come close to being an audit trail, so that can't be the justification. And if there is an audit trail somewhere in a log or whatever, this field would be redundant.
What am I missing? Why is this pattern so popular?
Such a field can be used to detect whether there are conflicting edits made by different processes. When you retrieve a record from the database, you get the previous DateLastUpdated field. After making changes to other fields, you submit the record back to the database layer. The database layer checks that the DateLastUpdated you submit matches the one still in the database. If it matches, then the update is performed (and DateLastUpdated is updated to the current time). However, if it does not match, then some other process has changed the record in the meantime and the current update can be aborted.
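That optimistic-concurrency check can be sketched like this (an in-memory stand-in for the database layer; a logical counter stands in for the real timestamp, and all names are illustrative):

```scala
final case class Record(value: String, lastUpdated: Long)

final class Table {
  private var rows = Map.empty[Int, Record]
  private var clock = 0L // stand-in for the database's current time

  def put(id: Int, value: String): Unit = {
    clock += 1
    rows += id -> Record(value, clock)
  }

  def read(id: Int): Option[Record] = rows.get(id)

  // The update is applied only if the DateLastUpdated the caller read
  // still matches the stored one; otherwise another process got there
  // first and the caller must re-read and retry (or abort).
  def update(id: Int, seenLastUpdated: Long, value: String): Boolean =
    rows.get(id) match {
      case Some(r) if r.lastUpdated == seenLastUpdated =>
        put(id, value); true
      case _ => false
    }
}
```

In SQL this is typically a single statement of the form `UPDATE ... WHERE id = ? AND DateLastUpdated = ?`, with the affected-row count telling you whether the check passed.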
It depends on the exact circumstance, but a timestamp like that can be very useful for autogenerated data - you can figure out whether something needs to be recalculated because a dependency has changed later on (this is how build systems calculate which files need to be recompiled).
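The build-system use boils down to a one-line staleness check, sketched here with illustrative names:

```scala
object Staleness {
  // A target must be rebuilt if any of its dependencies was updated
  // after the target itself was last produced.
  def needsRebuild(targetUpdatedAt: Long, depsUpdatedAt: Seq[Long]): Boolean =
    depsUpdatedAt.exists(_ > targetUpdatedAt)
}
```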
Also, many websites will have data marking "Last changed" on a page, particularly news sites that may edit content. The exact reason isn't necessary (and there likely exist backups in case an audit trail is really necessary), but this data needs to be visible to the end user.
These sorts of things are typically used for business applications where user action is required to initiate the update. Typically, there will be some kind of business app (eg a CRM desktop application) and for most updates there tends to be only one way of making the update.
If you're looking at address data, that was done through the "Maintain Address" screen, etc.
Such database auditing is there to augment business-level auditing, not to replace it. Call centres will sometimes (or always in the case of financial services providers in Australia, as one example) record phone calls. That's part of the audit trail too but doesn't tend to be part of the IT solution as far as the desktop application (and related infrastructure) goes, although that is by no means a hard and fast rule.
Call centre staff will also typically have some sort of "Notes" or "Log" functionality where they can type freeform text as to why the customer called and what action was taken so the next operator can pick up where they left off when the customer rings back.
Triggers will often be used to record exactly what was changed (eg writing the old record to an audit table). The purpose of all this is that with all the information (the notes, recorded call, database audit trail and logs) the previous state of the data can be reconstructed as can the resulting action. This may be to find/resolve bugs in the system or simply as a conflict resolution process with the customer.
It is certainly popular - Rails, for example, has a shorthand for it, as well as a creation timestamp (:timestamps).
At the application level it's very useful, as the same pattern is very common in views - look at the questions here for example (answered 56 secs ago, etc).
It can also be used retrospectively in reporting to generate stats (e.g. what is the growth curve of the number of records in the DB).
There are a couple of scenarios. Let's say you have an address table for your customers:
You have your CRM app; a customer calls to say his address changed a month ago. With the LastUpdate column you can see that this row for this customer hasn't been touched in 4 months.
Usually you use triggers to populate a history table so that you can see all the other history; if you see that the creation date and the updated date are the same, there is no point hitting the history table, since you won't find anything.
You calculate indexes (stock market); you can easily see that an index was recalculated just by looking at this column.
There are 2 DB servers; by comparing the date column you can find out whether all the changes have been replicated or not.
This is also very useful if you have to send delta feeds out to clients, that is, feeds containing only the records that have been changed or inserted since the date of the last feed.
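A minimal sketch of such a delta feed (names are illustrative): keep the cut-off timestamp of the previous feed, include only rows whose last-updated timestamp is newer, and advance the cut-off.

```scala
object DeltaFeed {
  final case class Rec(id: Int, lastUpdated: Long)

  // Returns the rows changed since the last feed, plus the new cut-off
  // to store for the next run.
  def nextFeed(rows: Seq[Rec], lastFeedAt: Long): (Seq[Rec], Long) = {
    val delta  = rows.filter(_.lastUpdated > lastFeedAt)
    val cutoff = (lastFeedAt +: delta.map(_.lastUpdated)).max
    (delta, cutoff)
  }
}
```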