What is the purpose of the QuickFIX message store?

What is the purpose of the message store structure in QuickFIX? I understand that you can log all incoming and outgoing FIX messages via the message store interface, and QuickFIX provides multiple implementations, such as the file store.
My question is: why do you even care about the message store, other than logging your FIX messages for the record?

You are confusing the MessageStore and the Log, which are two different things.
The MessageStore is for internal engine use. It tracks the current incoming and outgoing message sequence numbers, the session start time, and other state. If your app goes down for whatever reason, then when it restarts, it uses the MessageStore to resume where it left off with regard to sequence numbers and whether to reset the session.
The Log, however, is just a log. The engine doesn't really care about it. It's for the developers.
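To make the distinction concrete, here is a minimal QuickFIX/J initiator setup showing that the store and the log are wired in independently; the config file name is an assumption for this sketch, not anything the engine mandates.

    import quickfix.*;

    public class InitiatorMain {
        public static void main(String[] args) throws Exception {
            SessionSettings settings = new SessionSettings("sessions.cfg"); // assumed file name

            // MessageStore: engine-internal state (sequence numbers, session creation time).
            MessageStoreFactory storeFactory = new FileStoreFactory(settings);

            // Log: a human-readable record of messages and events; the engine never reads it back.
            LogFactory logFactory = new FileLogFactory(settings);

            Application application = new ApplicationAdapter(); // your fromApp()/toApp() callbacks
            Initiator initiator = new SocketInitiator(
                    application, storeFactory, settings, logFactory, new DefaultMessageFactory());
            initiator.start();
        }
    }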

Google PubSub with pull subscriber design flaw?

We are using Google's streaming pull subscriber. The design is as follows:
1. A file is sent from the FE (frontend) to the BE (backend).
2. The BE converts that file to a byte array and publishes it to a Pub/Sub topic as a message (so a byte array goes as the message payload).
3. The topic sends that message to the subscriber, and the subscriber converts the byte array back into a file.
4. The subscriber sends the converted file to a tool.
5. The tool does some processing with the file and notifies the subscriber of the status.
6. That status goes to the BE, the BE updates the DB, and the status is sent on to the FE.
Now, in our subscriber, when we receive a message we acknowledge it immediately and remove the subscriber's listener so that we don't receive any more messages. When the tool is done with its work, it sends the status to the subscriber (we have an Express server running on the subscriber), and after receiving the status we re-create the subscriber's listener to receive messages again.
Note:
- the tool may take 1 hour or more to do its work
- we are using an ordering key to distribute messages across the VMs
This code is working fine, but my questions are:
- is there any flaw in this design (because we are removing the listener and then re-creating it, or anything like that)?
- is there a better option or a GCP service that would best fit this design?
- are there any improvements to make to the code?
EDIT: Removed code sample
I would say that there are several parts of this design that are sub-optimal. First of all, acking a message before you have finished processing it means you risk message loss. What happens if your tool or subscriber crashes after acking the message, but before processing has completed? This means when the processes start back up, they will not receive the message again. Are you okay with requests from the frontend possibly never being processed? If not, you'll want to ack after processing is completed, or--given that your processing takes so long--persist the request to a database or to some storage and then acknowledge the message. If you are going to have to persist the file somewhere else anyway, you might want to consider taking Pub/Sub out of the picture and just writing the file to storage like GCS and then having your subscribers instead read out of GCS directly.
Secondly, stopping the subscriber upon each message being received is an anti-pattern. Your subscriber should be receiving and processing each message as it arrives. If you need to limit the number of messages being processed in parallel, use message flow control.
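As a sketch of both points, assuming the google-cloud-pubsub Java client (the project/subscription names and the processFile helper are placeholders): acknowledge only after the work finishes, and cap parallelism with flow control instead of tearing the listener down.

    import com.google.api.gax.batching.FlowControlSettings;
    import com.google.cloud.pubsub.v1.AckReplyConsumer;
    import com.google.cloud.pubsub.v1.MessageReceiver;
    import com.google.cloud.pubsub.v1.Subscriber;
    import com.google.pubsub.v1.ProjectSubscriptionName;
    import com.google.pubsub.v1.PubsubMessage;
    import org.threeten.bp.Duration;

    public class FileSubscriber {
        public static void main(String[] args) {
            ProjectSubscriptionName subscription =
                    ProjectSubscriptionName.of("my-project", "my-subscription"); // placeholders

            MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
                try {
                    processFile(message.getData().toByteArray()); // hand the file to the tool
                    consumer.ack();   // ack only after processing has actually completed
                } catch (Exception e) {
                    consumer.nack();  // let Pub/Sub redeliver so the request is not lost
                }
            };

            Subscriber subscriber = Subscriber.newBuilder(subscription, receiver)
                    // Limit in-flight messages instead of removing and re-creating the listener.
                    .setFlowControlSettings(FlowControlSettings.newBuilder()
                            .setMaxOutstandingElementCount(1L)
                            .build())
                    // The tool can take over an hour, so allow a long ack-deadline extension.
                    .setMaxAckExtensionPeriod(Duration.ofHours(2))
                    .build();
            subscriber.startAsync().awaitRunning();
        }

        private static void processFile(byte[] data) { /* placeholder for the long-running work */ }
    }

Note that holding a message unacked for an hour or more is stretching the model; for work that long, persisting the request and acking quickly, as described above, is usually the safer design.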
Also, ordering keys aren't really a way to "distribute messages across the VMs." Ordering keys are only a means to ensure ordered delivery. There are no guarantees that the messages for the same ordering key will continually go to the same subscriber client. In fact, if you shut down the subscriber client after receiving each message, then another subscriber could receive the next message for the ordering key, since you've acked the earlier message. If all you mean by "distribute messages" is that you want the messages delivered in order, then this is the correct way to use ordering keys.
You say you have a subscription per client; whether or not that is the right thing to do depends on what you mean by "client." If a client is a "user of the front end," then I imagine you plan to have a different topic per user as well. If so, then you need to keep in mind the 10,000-topics-per-project limit. If you mean that each VM has its own subscription, then note that each VM is going to receive every message published to the topic. If you only want one VM to receive each message, then you need to use the same subscription across all VMs.
In general, also keep in mind that Cloud Pub/Sub has at-least-once delivery semantics. That means that even an acknowledged message could be redelivered, so you do need to be prepared to handle duplicate message delivery.
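A common guard against redelivery, sketched here with an in-memory set (a real system would key a durable store such as a database or Redis on the message ID or a request ID of your own):

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class DedupingHandler {
        // In production this would live in a database or Redis, not in process memory.
        private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

        public void handle(String messageId, byte[] payload) {
            if (!processedIds.add(messageId)) {
                return; // duplicate delivery: already processed, drop it
            }
            // ... process the payload exactly once ...
        }
    }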

Should I use Redis to store chat messages?

So I am currently working on a chat, and I wonder if I could use Redis to store the chat messages. The messages will only exist on the web, and I want a history of at least 20 messages for each private chat. The chats' subscribers are already stored in MongoDB.
I mainly want to use Redis so I can skip the MongoDB round trip and get more speed.
I already use Pub/Sub, but what about storing a copy of each message in Redis Lists? Also, what about read statuses, how could I implement those?
Redis only loses data in cases like a power outage; if the system is shut down properly, it will save its data, and in that case data won't be lost.
It is a good approach to dump data from Redis to MongoDB (or any other DB) when a size limit is reached, or on a date basis (weekly or monthly), so that your realtime chat database stays lightweight.
Many modern systems nowadays prepare for power outages: a UPS kicks in and the system shuts down properly.
See: https://hackernoon.com/how-to-shutdown-your-servers-in-case-of-power-failure-ups-nut-co-34d22a08e92
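As a sketch of keeping only recent history in Redis, assuming the Jedis client and a key scheme of my own choosing (chat:{chatId}:messages): LPUSH each message and LTRIM the list so only the latest 20 survive.

    import java.util.List;
    import redis.clients.jedis.Jedis;

    public class ChatHistory {
        private static final int HISTORY_SIZE = 20; // last 20 messages per private chat

        private final Jedis jedis = new Jedis("localhost", 6379);

        public void append(String chatId, String messageJson) {
            String key = "chat:" + chatId + ":messages";
            jedis.lpush(key, messageJson);          // newest message goes to the head
            jedis.ltrim(key, 0, HISTORY_SIZE - 1);  // drop everything beyond the last 20
        }

        public List<String> recent(String chatId) {
            return jedis.lrange("chat:" + chatId + ":messages", 0, -1);
        }
    }

If you also want the weekly or monthly dump described above, copy the tail of the list into MongoDB before trimming.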
Also what about read statuses, how could I implement those?
That depends on the protocol you are implementing; if you are using XMPP, see this.
Otherwise, you can use a property in the message model, e.g. "DeliveryStatus", and set it to one of your enum values (1. Sent, 2. Delivered, 3. Read). Mark a message as Sent as soon as it is received at the server. For Delivered and Read, your clients will send you back packets indicating that the respective action has occurred.
As pointed out in the comment above, the important thing to consider here is the persistence model. Redis offers some persistence (with RDB snapshots and AOF files). The important thing is to first understand what you need:
Can you afford to lose all the data? Can you afford to lose some of it? If the answer is no, then perhaps you should not bother with Redis.

Can I edit messages on an MQTT server?

Building an instant chat app (native iOS and web). Exploring whether to use XMPP or MQTT as the application protocol. It seems I can't have users edit old messages on XMPP. Can messages be edited on MQTT?
Example: I want to implement "Edit Message" like Slack offers, but upon clicking "(edited)" to allow the user to see the different versions of the message and their timestamps (like the edit history for comments you find in Facebook), enabling an "audit trail" of the conversation.
Follow-up: As it seems this can only be achieved through a "hack", would it be better to get the hack done on XMPP or MQTT or some other protocol/websockets/JSON, etc?
Once an MQTT message is published to the broker, the publishing client has no more control over that message at all.
Most brokers will not allow you to edit the message either, as they just forward the message instantly to all clients subscribed to the relevant topics and queue the message for any offline clients with persistent subscriptions.
The only exception may be the mosca broker, which has a callback for when messages reach the broker, but this would not allow a user to edit a message, only the system to possibly update the payload in the instant before it is forwarded to the subscribed clients.
Hardlib's advice is correct: editing messages in this way is not supported by most MQTT implementations, and implementing it would break the loose coupling between publisher and subscriber that is the virtue of MQTT. In other words, this should be implemented at a higher level or through other means.
That said, if I understand editing to mean the ability to change what the broker forwards to clients that were not online during the initial publication, you could implement this with retained messages. Consider this:
Client A is subscribed to topic clientb/# and Client B is subscribed to topic clienta/#.
Client A publishes a retained message to clienta/(unique message id) while Client B is not actively connected. The broker retains the message.
Client A decides to edit the message, so (through some interface you devise) they publish an amended retained message to clienta/(unique message id), which replaces the original and, from a subscriber's perspective, edits what is there.
Client B receives the amended message when they come online and (as long as there isn't a persistent session or something like that) has no knowledge of the change.
From this example you can probably tell why this is a bad idea: the server would retain every single message in a different topic and would likely need regular pruning... not to mention that it would make a mess of timestamps and require all sorts of other workarounds. However, if there is some reason that you have to implement it this way, you could hack something usable together.
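If you did go down this road anyway, the mechanics are just a retained publish to a per-message topic. Here is a minimal sketch with the Eclipse Paho Java client; the broker URL, topic scheme, and message ID are assumptions.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class EditableMessages {
        public static void main(String[] args) throws MqttException {
            MqttClient client = new MqttClient("tcp://broker.example.com:1883", "clientA");
            client.connect();

            // Original message: retained, so the broker keeps the latest payload for the topic.
            publishRetained(client, "clienta/msg-0001", "hello wrold");

            // "Edit": republishing to the same per-message topic replaces the retained payload;
            // a client that connects later only ever sees the amended version.
            publishRetained(client, "clienta/msg-0001", "hello world");

            client.disconnect();
        }

        private static void publishRetained(MqttClient client, String topic, String body)
                throws MqttException {
            MqttMessage message = new MqttMessage(body.getBytes());
            message.setQos(1);
            message.setRetained(true);
            client.publish(topic, message);
        }
    }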

capturing incoming FIX messages which fail QuickFix validation

A QuickFIX client validates incoming messages using XML spec files. If a message fails validation, QuickFIX automatically sends a rejection response. AFAIK, in this case QuickFIX does not call fromApp(), the standard callback for incoming messages, so up until now I have been unable to programmatically capture these erroneous incoming messages and handle them.
Is there a way to capture incoming FIX messages which fail quickfix validation?
Of course they may appear in the default quickfix log files, but I would rather capture them in my code in realtime.
There is not.
QuickFIX simply does not consider this a useful feature. If a message is invalid, QF performs the protocol-specified behavior and there is nothing that the application could or should do to recover. Any fix will require developer analysis and xml and/or code fixes, thus log files are sufficient to record the issue.
If you would like an automated alert when such errors occur, I suggest perhaps some kind of external log monitoring app that could watch your logs for occurrences of 35=3 or 35=j. (On the cheap side, a composition of cron/grep actions could do this very easily.)
Validation via the XML spec file is part of session-level processing.
So, there is no suitable hook for this.
On the other hand, there are some configuration parameters:
UseDataDictionary : turns off data-dictionary validation entirely (when set to N)
ValidateUserDefinedFields : turns off validation of user-defined fields (when set to N)
Look up their detailed descriptions.
edit:
If your real problem is monitoring rejections, capturing Reject (35=3) and BusinessMessageReject (35=j) messages in the toAdmin() hook is sufficient.
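A minimal QuickFIX/J sketch of that hook (the alert() call is a hypothetical placeholder for your monitoring):

    import quickfix.FieldNotFound;
    import quickfix.Message;
    import quickfix.SessionID;
    import quickfix.field.MsgType;

    public class RejectMonitoringApplication extends quickfix.ApplicationAdapter {
        @Override
        public void toAdmin(Message message, SessionID sessionId) {
            try {
                String msgType = message.getHeader().getString(MsgType.FIELD);
                if (MsgType.REJECT.equals(msgType)
                        || MsgType.BUSINESS_MESSAGE_REJECT.equals(msgType)) {
                    // The engine is about to send a rejection for an invalid incoming message.
                    alert(message.toString()); // hypothetical alerting hook
                }
            } catch (FieldNotFound ignored) {
                // No MsgType in the header; nothing to do.
            }
        }

        private void alert(String rawFixMessage) { /* push to your monitoring system */ }
    }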

How can I get QuickFix to process messages that come in from a resend request?

I am writing an acceptor application that uses a persistent FIX session. I am trying to write a recovery mode such that if I go offline or my program restarts, then when I reconnect I can reprocess all the messages sent to me during the day and get back to the current state.
To do this, when I start up I send the server a resend request for all messages. They fire back all the relevant messages, marked with PossDupFlag=Y and PossResend=Y. Before each message, they send a sequence reset for the repeated message they are about to send.
The problem is, though, that these messages do not seem to be processed by my message cracker. Neither fromAdmin nor fromApp gets these messages. I assume they are being ignored because of the dup flag and/or resend flag. So is there a way for me to tell QuickFIX that I want to see these messages?
On that note- if anyone has any recommendations on better recovery processes I would be open to them.
Thanks.
There's at least a couple of potential problems with this recovery strategy. The first is that it's not very friendly to your trading counterparty. If you only receive a small number of messages during your session then it may not be an issue, but if you receive hundreds of thousands of messages then your counterparty might complain about the massive resends.
The other issue is that message resend is intended for error recovery and is managed by the session protocol layer. In QuickFIX/J (and other FIX engines) the session maintains recovery state in addition to sending the ResendRequest automatically when it detects a sequence number gap. Your approach might work if you reset the next expected incoming sequence number to 1. When the session receives the next message with a higher sequence number it will detect the gap and request the missing messages. If the messages are validated, they will be forwarded to application layer with the PossDup flag set. If you send the ResendRequest message yourself the behavior is undefined since the session state will not have been set up properly.
I recommend using a MessageLog implementation to store your incoming messages in a form you can use for recovery when your application starts. You can look at the implementation of the existing message logs (FileLog, JdbcLog) to get some ideas.
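A sketch of that idea against the QuickFIX/J quickfix.Log interface (the persist() call is a placeholder; FileLog and JdbcLog show complete implementations, and you would wire this in through a LogFactory):

    import quickfix.Log;

    public class RecoveryLog implements Log {
        @Override
        public void onIncoming(String message) {
            persist(message); // save the raw incoming FIX string for replay on restart
        }

        @Override
        public void onOutgoing(String message) { /* not needed for recovery */ }

        @Override
        public void onEvent(String text) { /* session events; ignore */ }

        @Override
        public void onErrorEvent(String text) { /* ignore */ }

        @Override
        public void clear() { /* wipe stored messages when the session is reset */ }

        private void persist(String rawFixMessage) { /* hypothetical durable write */ }
    }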
The behaviour occurs because the engine's persistence system tells it that the received messages are resent messages, and so (per the FIX protocol specification) they are discarded. Here we save FIXML strings into our database to provide a recovery ability similar to the one you describe (they are also written to XML files on disk for other reasons). I don't believe there is any way to tell QuickFIX that you want to see duplicate messages, but it is probably better to use a different form of persistence anyway, to save on connection overheads. QuickFIX does provide a way of outputting messages to a file as they come in, if that helps.
I have the same issue, and what Frank says is absolutely correct.
Just use the method below to set the next expected target sequence number to the beginning sequence number of the desired resend request:
getSession()->setNextTargetMsgSeqNum(atoi(seq.c_str()));
The engine internally detects that the incoming sequence number is larger than expected, automatically sends a resend request, and all the messages are then delivered to the onMessage callback as usual.