Is there a way to log (in a file or database table) the messages sent to and/or received from a Service Broker service without altering the activation procedure or message sending code?
I know I can alter the activation procedure to write the received message to a table before (or after) processing it, and I can change the sending stored procedure to do the same, but I am looking for something that:
Doesn't require me to build, code, and debug the infrastructure from scratch.
Has been extensively tested and is known not to interfere with normal processing (i.e., is transparent).
Can be enabled or disabled at will.
Note: I need to log the actual message content and associated metadata.
TIA
Short answer: No.
You can monitor the Broker:Conversation event class via a server-side trace or an event notification, but that will only give you metadata, not the message content.
You can enable queue retention, but the retained messages are deleted anyway when the conversation ends, and retention is problematic to turn on and off.
The best option, by far, is to do it from the activated stored procedure itself.
I'm following the axon-springboot example shared by Allard (https://github.com/abuijze/bootiful-axon).
My understanding so far is: (please correct me if I have misunderstood some of the concepts)
Events are raised and stored in the event store / event bus (MySQL, using the EmbeddedEventStore). Event processors (TrackingProcessors, in my case) then pull events from the source (MySQL, right?), and event handlers execute the business logic, update the query storage, and publish a message to RabbitMQ.
First question: where, when, and by whom is this message published to RabbitMQ (which is used by the statistics application that has a message listener configured)?
I have configured the TrackingProcessor to try the replay functionality. To execute the replay, I stop my processor, delete the token entry for the processor, and start the processor again; the events are replayed and my query storage is up to date as expected.
Second question: when the replay is triggered and the query storage is updated, I don't see any messages being published to RabbitMQ, so my statistics application is out of sync. Am I doing something wrong?
Can you please advise?
Thanks
Singh
First of all, a correction: it is not the Tracking Processor or the updater of the view model that sends the messages to RabbitMQ. The Events are forwarded to Rabbit as they are published to the Event Bus.
The answer to your first question: messages are published by the SpringAMQPPublisher, which subscribes directly to the Event Bus and forwards every event to RabbitMQ as it is published.
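For reference, a minimal sketch of how that forwarding is typically wired up in Axon 3 (with the axon-amqp module on the classpath, Spring Boot autoconfiguration normally creates this for you when axon.amqp.exchange is set; the explicit bean definition and the exchange name here are assumptions):

import org.axonframework.amqp.eventhandling.spring.SpringAMQPPublisher;
import org.axonframework.eventhandling.EventBus;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AmqpForwardingConfig {

    // The publisher subscribes to the Event Bus and forwards every event to the
    // configured exchange at the moment it is published (not when a processor handles it).
    @Bean(initMethod = "start", destroyMethod = "shutDown")
    public SpringAMQPPublisher amqpPublisher(EventBus eventBus,
                                             ConnectionFactory connectionFactory) {
        SpringAMQPPublisher publisher = new SpringAMQPPublisher(eventBus);
        publisher.setConnectionFactory(connectionFactory);
        publisher.setExchangeName("events"); // exchange name is an assumption
        return publisher;
    }
}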
To answer your second question, let's clarify how replays work, first. While it's called a "replay", essentially it's more a "reset". The Tracking Processor uses a TrackingToken to remember its progress of processing the Event Store. When the token is deleted (or just not yet available), the Tracking Processor starts processing from the beginning of the Event Store.
You never replay an entire application, just a single (Tracking) Processor. Just imagine: you re-publish all messages to RabbitMQ again, other components are triggered again, unaware of the fact that these are "old" messages, and user-confirmation emails are sent again, orders are placed again, etc.
If your statistics are out of date, it's because they aren't managed by the same processor and therefore aren't rebuilt together with the other elements. RabbitMQ doesn't support "replaying", since it doesn't remember messages after delivering them.
Any model that you want to be able to rebuild, should be managed by a Tracking Processor.
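As an illustration of that point, a minimal sketch (Axon 3 with Spring Boot; the group name, the property key in the comment, and the catch-all Object handler are assumptions chosen to keep it self-contained) of putting the statistics projection under its own tracking processor so its token can be deleted and the model rebuilt:

import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

// Assign the projection to its own processing group, and mark that group as tracking
// in application.properties, e.g.:
//   axon.eventhandling.processors.statistics.mode=tracking
@Component
@ProcessingGroup("statistics") // group name is an assumption
public class StatisticsProjection {

    @EventHandler
    public void on(Object event) {
        // Object is a catch-all here; normally you handle specific event types.
        // Update the statistics view model. Because this runs in a tracking processor,
        // deleting its token and restarting the processor replays it from the start
        // of the event store.
    }
}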
Check out the Axon Reference guide for more information: https://docs.axonframework.org/part3/event-processing.html#event-processors
A QuickFIX client validates incoming messages using XML spec files. If a message fails validation, QuickFIX automatically sends a rejection response. AFAIK, in this case QuickFIX does not call the standard callback for incoming application messages, fromApp(), so I have so far been unable to programmatically capture these erroneous incoming messages and handle them.
Is there a way to capture incoming FIX messages which fail quickfix validation?
Of course they may appear in the default QuickFIX log files, but I would rather capture them in my code in real time.
There is not.
QuickFIX simply does not consider this a useful feature. If a message is invalid, QF performs the protocol-specified behavior and there is nothing that the application could or should do to recover. Any fix will require developer analysis and XML and/or code changes, so log files are sufficient to record the issue.
If you would like an automated alert when such errors occur, I suggest perhaps some kind of external log monitoring app that could watch your logs for occurrences of 35=3 or 35=j. (On the cheap side, a composition of cron/grep actions could do this very easily.)
Validation against the XML spec file happens at the session level.
So there is no suitable hook for this.
On the other hand, there are some configuration parameters:
UseDataDictionary=N disables data dictionary validation of messages.
ValidateUserDefinedFields=N disables validation of user-defined fields.
See the QuickFIX configuration documentation for detailed descriptions.
edit:
If your real problem is monitoring rejections, capturing the outgoing Reject (35=3) messages in the toAdmin() hook (and BusinessMessageReject (35=j), which is an application-level message, in toApp()) is sufficient.
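For illustration, a QuickFIX/J-flavored sketch of that idea, assuming you only need to observe the rejects your own engine sends out (ApplicationAdapter provides no-op defaults in recent QuickFIX/J versions; if yours lacks it, implement quickfix.Application directly):

import quickfix.ApplicationAdapter;
import quickfix.FieldNotFound;
import quickfix.Message;
import quickfix.SessionID;
import quickfix.field.MsgType;

public class RejectAwareApplication extends ApplicationAdapter {

    // Session-level Reject (35=3) is an admin message, so it passes through toAdmin().
    @Override
    public void toAdmin(Message message, SessionID sessionId) {
        logIfReject(message, "3");
    }

    // BusinessMessageReject (35=j) is an application-level message, so it shows up in toApp().
    @Override
    public void toApp(Message message, SessionID sessionId) {
        logIfReject(message, "j");
    }

    private void logIfReject(Message message, String rejectMsgType) {
        try {
            if (rejectMsgType.equals(message.getHeader().getString(MsgType.FIELD))) {
                // Alert or persist here; the reject carries RefSeqNum and the reason text.
                System.out.println("Outgoing reject: " + message);
            }
        } catch (FieldNotFound e) {
            // A header without MsgType should not occur; ignore.
        }
    }
}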
What is the purpose of the message store structure in QuickFIX? I understand that you can log all incoming and outgoing FIX messages via the message store interface, and QuickFIX provides multiple implementations, like the file store, etc.
My question is: why would you even care about the message store, other than for logging your FIX messages for the record?
You are confusing the MessageStore and the Log, which are two different things.
The MessageStore is for internal engine use. It tracks the current incoming and outgoing message sequence numbers, session start time, and other stuff. If your app goes down for whatever reason, when it restarts, it uses the MessageStore to resume where it left off with regards to sequence number and whether to reset the session.
The Log, however, is just a log. The engine doesn't really care about it. It's for the developers.
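To make the distinction concrete, here is a hedged QuickFIX/J bootstrap sketch in which the store (engine state) and the log (developer-facing record) are configured side by side; the config file name is an assumption, and the no-op ApplicationAdapter stands in for your real Application:

import quickfix.Acceptor;
import quickfix.Application;
import quickfix.ApplicationAdapter;
import quickfix.DefaultMessageFactory;
import quickfix.FileLogFactory;
import quickfix.FileStoreFactory;
import quickfix.LogFactory;
import quickfix.MessageFactory;
import quickfix.MessageStoreFactory;
import quickfix.SessionSettings;
import quickfix.SocketAcceptor;

public class EngineBootstrap {
    public static void main(String[] args) throws Exception {
        SessionSettings settings = new SessionSettings("acceptor.cfg"); // file name is an assumption

        // MessageStore: engine-internal state (sequence numbers, session creation time)
        // that lets a restarted engine resume the session where it left off.
        MessageStoreFactory storeFactory = new FileStoreFactory(settings);

        // Log: human-readable record of messages and events; the engine never reads it back.
        LogFactory logFactory = new FileLogFactory(settings);

        MessageFactory messageFactory = new DefaultMessageFactory();
        Application application = new ApplicationAdapter(); // replace with your own Application

        Acceptor acceptor = new SocketAcceptor(
                application, storeFactory, settings, logFactory, messageFactory);
        acceptor.start();
    }
}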
I am writing an acceptor application and using a persistent FIX session. I am trying to write a recovery mode so that if I go offline or my program restarts, I can reprocess all the messages sent to me during the day when I reconnect, to get back to the current state.
To do this, when I start up I send a resend request for all messages to the server. They fire back all the relevant messages to me, marked with PossDupFlag=Y and PossResend=Y. Before each message, they send a sequence reset for the repeated message they are about to send.
The problem is though, these messages do not seem to be processed by my message cracker. Both fromAdmin and fromApp do not get these messages. I assume they are being ignored because of the dup flag and/or resend. So is there a way for me to tell QuickFIX that I want to see these messages?
On that note, if anyone has any recommendations on better recovery processes, I would be open to them.
Thanks.
There's at least a couple of potential problems with this recovery strategy. The first is that it's not very friendly to your trading counterparty. If you only receive a small number of messages during your session then it may not be an issue, but if you receive hundreds of thousands of messages then your counterparty might complain about the massive resends.
The other issue is that message resend is intended for error recovery and is managed by the session protocol layer. In QuickFIX/J (and other FIX engines) the session maintains recovery state in addition to sending the ResendRequest automatically when it detects a sequence number gap. Your approach might work if you reset the next expected incoming sequence number to 1. When the session receives the next message with a higher sequence number, it will detect the gap and request the missing messages. If the messages are validated, they will be forwarded to the application layer with the PossDup flag set. If you send the ResendRequest message yourself, the behavior is undefined, since the session state will not have been set up properly.
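For example, in QuickFIX/J that reset could look roughly like the sketch below, run once at startup (the BeginString and CompIDs are placeholders for your own session):

import java.io.IOException;

import quickfix.Session;
import quickfix.SessionID;

public class ResendOnStartup {
    public static void trigger() {
        SessionID sessionId = new SessionID("FIX.4.4", "MYCOMP", "COUNTERPARTY"); // placeholders
        Session session = Session.lookupSession(sessionId);
        if (session != null) {
            try {
                // Expecting 1 next makes the counterparty's next (higher) sequence number
                // look like a gap, so the session itself issues the ResendRequest.
                session.setNextTargetMsgSeqNum(1);
            } catch (IOException e) {
                // The message store could not persist the new sequence number; log/handle as needed.
            }
        }
    }
}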
I recommend using a MessageLog implementation to store your incoming messages in a form you can use for recovery when your application starts. You can look at the implementation of the existing message logs (FileLog, JdbcLog) to get some ideas.
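A minimal sketch of that suggestion, assuming QuickFIX/J's quickfix.Log interface (the one FileLog and JdbcLog implement); the in-memory list is a placeholder for whatever table or file you would actually read back at startup:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import quickfix.Log;

public class RecoveryLog implements Log {

    // Placeholder store; in practice persist to a table or file you can read back on restart.
    private final List<String> incoming = new CopyOnWriteArrayList<>();

    @Override
    public void onIncoming(String message) {
        incoming.add(message); // raw FIX string as received
    }

    @Override
    public void onOutgoing(String message) {
        // not needed for incoming-message recovery
    }

    @Override
    public void onEvent(String text) {
        // session events (logon, logout, ...) are not needed here
    }

    @Override
    public void onErrorEvent(String text) {
        // ignore, or route to your error reporting
    }

    @Override
    public void clear() {
        incoming.clear();
    }

    // Read these back at startup and feed them through your own recovery logic.
    public List<String> capturedIncoming() {
        return incoming;
    }
}

You would register it through a LogFactory passed to the initiator/acceptor, alongside (not instead of) your MessageStoreFactory.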
The behaviour occurs because the engine's persistence system tells it that the received messages are resent messages, and so (per the FIX protocol specification) they are discarded. Here we save FIXML strings into our database to provide a recovery ability similar to what you describe (they are also written to XML files on disk for other reasons). I don't believe there is any way to tell QuickFIX that you want to see duplicate messages, but it is probably better to use a different form of persistence to save on connection overheads. QuickFIX does provide a way of writing messages to a file as they come in, if that helps.
I had the same issue, and what Frank says is absolutely correct.
Just use the method below to set the next target sequence number to the begin sequence number of the desired resend request.
getSession()->setNextTargetMsgSeqNum(atoi(seq.c_str()));
The engine internally detects that the incoming (target) sequence number is higher than expected, automatically sends a resend request, and all the resent messages are delivered through the usual onMessage callbacks.
I have a situation where I'd like to keep a history or log of all MSMQ messages which have been processed (at least for a period of time). I realize that I can look at the current Queues using Computer Management -> Services and Applications -> Message Queuing. But what I'd like is a history or log of the messages which have already been processed.
I have so far been unable to find a non-programmatic way to do this. Ideally, it would be as simple as setting an MSMQ property so that all messages get logged to either a file or even the Windows log.
Does anyone know if this (or something similar) is possible?
You can do this with target journaling. (This assumes you want to store the messages on the receiving machine.) From MSDN:
Target journaling is the process of storing a copy of incoming messages. It is configured on a queue basis. When target journaling is enabled, a copy of each incoming message is placed in the target-journal queue when the message is removed (read) from the target queue. A target-journal queue (Journal) is created for each queue when the queue is created. MSMQ Explorer displays target-journal queues under each public queue.