QuickFIX engine - does it persist messages before the start time on the server side?

If a QuickFIX session is created by the server (acceptor) at, say, 9 AM, but the StartTime is 11 AM, the session exists but is not active.
If the server receives an unsolicited message from an exchange that it needs to send on this session, will it persist that message (given PersistMessages=Y) and send it to the client (initiator) when the client connects after 11 AM?

No, it would not persist messages received before the start time; it would send you a reject message. The message is rejected at the interface itself and never handled, so you would have to resend it to get a response.

QuickFIX does persist (but not send) messages before a session is connected. The sequence numbers are updated and when the session is connected and the first message is sent, the counterparty FIX engine will see the gap in the sequence numbers and request a resend. QuickFIX will then resend the persisted messages. However, depending on your QuickFIX configuration, the outgoing messages might be considered to be too old and rejected locally.
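For context, this behaviour is governed by the session schedule and store settings in the acceptor's configuration file. A minimal sketch of the relevant section (the values here are illustrative, not taken from the question):
[DEFAULT]
ConnectionType=acceptor
SocketAcceptPort=9876
FileStorePath=store
PersistMessages=Y

[SESSION]
BeginString=FIX.4.4
SenderCompID=SERVER
TargetCompID=CLIENT
StartTime=11:00:00
EndTime=17:00:00
With PersistMessages=Y and a file store, messages written before StartTime remain in the store, and the resend-on-gap behaviour described above can serve them once the session becomes active.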

As I understand it, session start and end times exist to reflect the hours during which the corresponding exchange will accept orders.
The application and its sub-modules do not need to track these timings or take any action to close the FIX session; QuickFIX deactivates the session automatically.
Persisting the message and resending it when the session becomes active does not look desirable to me.
Instead, you can maintain a queue in the sending application to buffer such messages and send them only once the session is within its active window (a sketch of this follows below).
Those are my thoughts; hope that helps.
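A minimal sketch of that buffering idea, assuming QuickFIX/J: messages produced while the session is inactive are queued and flushed from the onLogon callback. The class and field names here are hypothetical.
import quickfix.*;

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical buffering application; ApplicationAdapter is QuickFIX/J's no-op
// Application base class (implement quickfix.Application directly on older versions).
public class BufferingApplication extends ApplicationAdapter {
    private final Queue<Message> pending = new ConcurrentLinkedQueue<>();

    // Called by the engine when the counterparty logs on (session is now active).
    @Override
    public void onLogon(SessionID sessionID) {
        Message msg;
        while ((msg = pending.poll()) != null) {
            try {
                Session.sendToTarget(msg, sessionID);   // flush everything queued while inactive
            } catch (SessionNotFound e) {
                pending.add(msg);                       // should not happen right after logon
                break;
            }
        }
    }

    // Call this from your own code instead of sending directly.
    public void sendOrBuffer(Message msg, SessionID sessionID) {
        Session session = Session.lookupSession(sessionID);
        if (session != null && session.isLoggedOn()) {
            try {
                Session.sendToTarget(msg, sessionID);
            } catch (SessionNotFound e) {
                pending.add(msg);
            }
        } else {
            pending.add(msg);                           // outside session hours: hold the message
        }
    }
}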

Related

XMPP messages are lost when the client connection is lost suddenly

I am using an ejabberd server and the iOS XMPPFramework.
There are two clients, A and B.
When A and B are online, A can send a message to B successfully.
If B is offline, B receives the message when B comes online again.
But when B suddenly/unexpectedly loses its connection, for example because Wi-Fi is manually turned off, the message sent by A is lost; B will never receive it.
I guess the reason is that B lost the connection suddenly and the server still thinks B is online, so offline message delivery does not work in this situation.
So my question is: how do I ensure that a message sent by A will be received by B, so that no messages are lost?
I've spent the last week trying to track down missing messages in my XMPPFramework and eJabberd messaging app. Here are the full steps I went through to guarantee message delivery and what the effects of each step are.
Mod_offline
In the ejabberd.yml config file ensure that you have this in the access rules:
max_user_offline_messages:
  admin: 5000
  all: 100
and this in the modules section:
mod_offline:
  access_max_user_messages: max_user_offline_messages
When the server knows the recipient of a message is offline, it will store the message and deliver it when the recipient reconnects.
Ping (XEP-0199)
xmppPing = XMPPPing()
xmppPing.respondsToQueries = true
xmppPing.activate(xmppStream)
xmppAutoPing = XMPPAutoPing()
xmppAutoPing.pingInterval = 2 * 60
xmppAutoPing.pingTimeout = 10.0
xmppAutoPing.activate(xmppStream)
Ping acts like a heartbeat, so the server knows when a user has gone offline without disconnecting normally. It's a good idea not to rely on this alone and to disconnect explicitly in applicationDidEnterBackground, because when the client loses connectivity or the stream disconnects for unknown reasons there is a window of time in which the client is offline but the server doesn't know it yet, since the next ping isn't expected until some time in the future. In that scenario the message is neither delivered nor stored for offline delivery.
Stream Management (XEP-0198)
xmppStreamManagement = XMPPStreamManagement(storage: XMPPStreamManagementMemoryStorage(), dispatchQueue: dispatch_get_main_queue())
xmppStreamManagement.autoResume = true
xmppStreamManagement.addDelegate(self, delegateQueue: dispatch_get_main_queue())
xmppStreamManagement.activate(xmppStream)
and then in xmppStreamDidAuthenticate
xmppStreamManagement.enableStreamManagementWithResumption(true, maxTimeout: 100)
Nearly there. The final step is to go back to the ejabberd.yml and add this line to the listening ports section underneath access: c2s:
resend_on_timeout: true
Stream Management adds req/ack handshakes after each message delivery. On its own it has no effect on the server side unless resend_on_timeout is set (which it isn't by default on ejabberd).
There is a final edge case to consider: if the acknowledgement of a received message never reaches the server, the server will hold the message for offline delivery, and the next time the client logs in it is likely to receive a duplicate. To handle this, set the delegate for the XMPPStreamManager: implement the xmppStreamManagement getIsHandled: delegate method and, if the message has a chat body, set isHandledPtr to false. Then, when you construct an outbound message, add an xmppElement with a unique id:
let xmppMessage = XMPPMessage(type: "chat", to: partnerJID)
let xmppElement = DDXMLElement(name: "message")
xmppElement.addAttributeWithName("id", stringValue: xmppStream.generateUUID())
xmppElement.addAttributeWithName("type", stringValue: "chat")
xmppElement.addAttributeWithName("to", stringValue: partnerJID.bare())
xmppMessage.addBody(message)
xmppMessage.addChild(xmppElement)
xmppMessage.addReceiptRequest()
xmppStream.sendElement(xmppMessage)
Then when you receive a message, inform the stream manager that the message has been handled with xmppStreamManager.markHandledStanzaId(message.from().resource)
The purpose of this final step is to establish a unique identifier that you can add to the XMPPMessageArchivingCoreDataStorage and check for duplicates before displaying.
I guess the reason is that B lost the connection suddenly and the server still thinks B is online, so offline message delivery does not work in this situation.
Yes, you are absolutely correct; this is a well-known limitation of TCP connections.
There are two approaches to your problem.
1 Server side
Since you are using ejabberd as the XMPP server, you can enable mod_ping. Enabling this module turns on a server-side heartbeat (ping): if the connection is broken, the server will try to ping the client and will detect that the connection between server and client is lost. This approach has one drawback: mod_ping has a ping_interval option which states how often to send the heartbeat to connected clients, and its lower limit is 32 seconds (any value below 32 is ignored by ejabberd). That leaves a black window of up to 32 seconds in which messages can be lost while the user still shows as online. A sample mod_ping configuration is sketched right after this paragraph.
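A minimal sketch of that configuration in ejabberd.yml (the values are illustrative, and option names can vary between ejabberd versions, so check your version's documentation):
modules:
  mod_ping:
    send_pings: true
    ping_interval: 60
    timeout_action: kill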
2 Client side
On the client side you can implement the Message Delivery Receipts mechanism. With each chat message, request a receipt from the receiving user; as soon as the receiver gets the message, it sends this receipt id back. That way you can detect that your message was actually delivered to the receiver. If you don't receive such an acknowledgement within a certain time interval, you can show the user as offline locally (on the phone), store any further messages to this user as offline messages locally (for example in a SQLite database), and wait for the offline presence stanza for that user. As soon as you receive the offline presence stanza, the server has finally detected that the connection to that user is lost and has marked the user as offline; now you can send all the stored messages to that user, and they will be stored as offline messages on the server. This is the best approach to avoid the black window.
Conclusion
You can use Approach 2 alone and design your client that way, or combine Approach 1 with Approach 2 to minimize the time it takes the server to detect a broken connection.
If B goes offline suddenly, then user A has to check whether B is online or offline when sending a message to user B. If user B is offline, user A has to upload that message to the server using a web service, and user B has to call the web service in the function below.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
So user B will get all the offline messages that were lost due to the connection loss.
In the end, I used Ping together with Stream Management (http://xmpp.org/extensions/xep-0198.html), and the problem was solved.

Using FIX protocol to connect to the broker

I am currently trying to build a program that uses the FIX protocol to communicate with a broker (Currenex). I sent my (self-generated) logon message to the server and got something back.
This is what I sent (the non-printable SOH field delimiters are rendered as | below):
8=FIX.4.4|9=88|35=A|49=xxxxxxxx|56=CNX|34=1|52=20140718-11:40:18.244|98=0|108=30|141=Y|554=xxxxxx|10=128|
(the SenderCompID and the password were replaced)
and I got
8=FIX.4.4|9=76|35=A|34=1|49=CNX|52=20140718-11:40:33.224|56=xxxxxxxx|141=Y|98=0|108=30|10=145|
8=FIX.4.4|9=70|35=h|34=2|49=CNX|52=20140718-11:40:33.226|56=xxxxxxxx|336=0|340=2|10=128|
back from the server.
I think I built the logon message correctly (or did I?). But when I sent a second request, a MarketDataRequest:
8=FIX.4.4|9=137|35=V|49=xxxxxxxx|56=CNX|34=2|52=20140718-11:42:53.504|262=363|263=1|264=0|265=1|266=N|267=2|269=1|269=0|146=1|55=GBP/USD|554=xxxx|10=013|
I had no response at all. I asked the broker and they said the connection dropped right away every time after I logged in.
I thought it was some connection problem, so I tried using RESTClient (Postman) to send the message, but the result was the same.
Could anyone take a look at my messages and point out if there is something stupid, please?
All I need is the real-time exchange rate so a simple FIX message example will be very helpful. Thanks a lot!
Regards,
Bo
Your logon response says that your trading session is open (340=2), so it's not a broker-side problem. I think your program drops the TCP/IP connection to the server after the logon message. The FIX protocol requires the TCP/IP connection to be kept alive for the whole FIX session; otherwise the session will be closed. So you need to rewrite your program to keep the connection open, send your requests over it, and listen for responses. Don't close the connection.
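If you don't want to manage the connection and heartbeats by hand, a FIX engine can do it for you. A minimal sketch, assuming QuickFIX/J; the settings file name initiator.cfg and the no-op application are placeholders you would replace with your own:
import quickfix.*;

public class CurrenexClient {
    public static void main(String[] args) throws Exception {
        // CompIDs, heartbeat interval (108), session schedule etc. live in the settings file.
        SessionSettings settings = new SessionSettings("initiator.cfg");   // hypothetical file name
        Application app = new ApplicationAdapter();          // replace with your own callbacks
        MessageStoreFactory store = new FileStoreFactory(settings);
        LogFactory log = new FileLogFactory(settings);
        MessageFactory messages = new DefaultMessageFactory();

        // The initiator opens the TCP connection, logs on, answers heartbeats and
        // keeps the session alive until stop() is called; no manual socket handling.
        SocketInitiator initiator = new SocketInitiator(app, store, settings, log, messages);
        initiator.start();

        // ... build MarketDataRequest (35=V) messages and send them with Session.sendToTarget(...) ...

        Thread.sleep(60_000);   // keep the process, and therefore the FIX session, alive
        initiator.stop();
    }
}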
Try using the Minifix tool, which will maintain the heartbeats and the session connection for you.
Ideally, for a 35=V request you should get one of:
35=W = Market Data-Snapshot/Full Refresh
35=X = Market Data-Incremental Refresh
35=Y = Market Data Request Reject
In response to your 35=A (Logon) request you might instead get a 35=3 (Reject) or a 35=4 (Sequence Reset).

HornetQ - guarantee that the message reached the queue

I am using org.hornetq.api.core.client.
How can I guarantee that the message I am sending actually reached the queue (not the client, just the queue)?
producer.send("validQueue",clientMessage)
Please note that the queue is a valid queue.
This similar question refers to an invalid queue; other ones, such as this one, are about delivery to the client.
It really depends on how you are sending.
Regarding the question you linked about an invalid queue: on JMS there is no way to send to an invalid queue, since the producer validates the queue's existence. With the HornetQ core API you send to an address (not to a queue), and that address may have no queue bound to it, so you have to query whether the address has any queues.
Now, for confirmation that the message was received:
Scenario I, persistent messages, non-transactional
Every send blocks; the client unblocks as soon as the server has acknowledged receiving the message. This is done automatically; you don't have to do anything.
Scenario II, non-persistent messages, non-transactional
There are no confirmations by default; the message is sent asynchronously. The assumption is that the message is transient and it's not a big deal if you lose it. You can change that by setting block-on-non-persistent-send on the ServerLocator.
Scenario III, transactional (either persistent or not)
As soon as you call commit, the message is on the queues.
Scenario IV, confirmation on send
You set a callback and get a method call as soon as the server has acknowledged the message on its queues (a sketch follows below). Look in the manual for the confirmation callback; the same feature also exists in JMS 2.
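A minimal sketch of that confirmation callback with the HornetQ core API; the connector details, address name, and surrounding class are illustrative, not taken from the question:
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class ConfirmedSend {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        // A confirmation window must be set for send acknowledgements to be delivered.
        locator.setConfirmationWindowSize(10 * 1024);

        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();

        // Invoked asynchronously once the server has accepted the message onto its queues.
        session.setSendAcknowledgementHandler(new SendAcknowledgementHandler() {
            @Override
            public void sendAcknowledged(org.hornetq.api.core.Message message) {
                System.out.println("Server acknowledged message " + message.getMessageID());
            }
        });

        ClientProducer producer = session.createProducer("validQueue");
        ClientMessage msg = session.createMessage(true);   // true = durable
        msg.getBodyBuffer().writeString("hello");
        producer.send(msg);

        // ... wait for the callback before treating the message as safely queued ...
        session.close();
        factory.close();
        locator.close();
    }
}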

MSMQ messages disappear from outbound queue but never arrive in the inbound queue

I have a strange issue setting up an existing application on our new internal Cloud.
I have a simple messaging system that pushes a message from one server (Server1) onto a MSMQ on another server (Server2). The messages disappear off the outbound but never appear in the inbound queue.
When I take Server2's MSMQ offline, the messages build up on Server1. Restarting MSMQ on Server2 causes the messages in the outbound queue on Server1 to disappear, but they still never arrive at Server2.
The details:
MSMQ is set up in Workgroup mode, as that's the virtual networks requirement.
Queues are private.
Permissions are set to allow certain users access.
Does anybody have any ideas on why this is happening, or how I could track down the issue?
It could be that the remote private queue is a transactional queue and you are sending the message as non-transactional, or vice versa. If the transaction setting on the queue and on the message do not match, the message will disappear!
I have seen this in the past with the direct format name where it was set to something like
DIRECT=OS:192.168.0.1\PRIVATE$\MyQueue
where I should have specified DIRECT=TCP:192.168.0.1\PRIVATE$\MyQueue
see:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms700996(v=vs.85).aspx
John Breakwell noted here (http://blogs.msdn.com/b/johnbreakwell/archive/2010/01/22/why-does-msmq-keep-losing-my-messages.aspx):
Server name used to address message doesn't match destination machine
When MSMQ receives a message from over the wire, it always validates that this machine is the correct recipient. This is to ensure that something like a DNS misconfiguration does not result in messages being delivered to the wrong place. The messages are, instead, discarded unless the IgnoreOSNameValidation registry value is set appropriately. You may want to do this with an Internet-facing MSMQ server, for example, where the domain and server names visible to MSMQ clients on the Internet often bear no resemblance to the real ones (for good security reasons).
It sounds like a permissions or addressing issue.
Try to enable the event log under Applications and Services Logs -> Microsoft -> Windows -> MSMQ called End2End.
This log should tell you exactly what is going wrong with the delivery of messages to the expected destination queue.
Note: for every successful delivery there should be three events raised in this log:
Message with ID blah came over the network (ie, message has arrived from a remote sender)
Message with ID blah was sent to queue blah (ie, message forwarded to local queue)
Message with ID blah was put into queue blah (ie, message arrives in local queue)
Assumes you are using Server 2008 and above.
You can add Negative Source Journaling to the sending application code to find out exactly what the root cause is. Most likely it is one of the two answers you have already received.
Are the messages arriving in the dead-letter queue on Server 2?

What is better practice for error notification by email

This question is language independent.
I have an application that handles requests in a loop. During this loop for each request multiple actions are taken. These actions are sitting inside try / catch / log blocks.
I am now extending this to notify administrators of severe errors via email.
This is all very easy, except for one thing. We are relying on the clients to implement their own email delivery redundancy, and I know from experience there will always be one client who has just a single SMTP (Exchange) server, which is bound to go down from time to time.
So here is the dilemma:
Scenario 1 (don't handle the error during a failed send): when I send an email to the admin and SMTP is down, the unhandled error breaks the app (the app stops running and no further loops are processed). The error reporting that was supposed to benefit the app suddenly becomes the reason why 99 out of 100 requests don't get processed because there was an issue with request 1.
Scenario 2 (handle the exception during a failed send): I surround the send code in try/catch/log blocks. Great! The application processes 99 of the 100 requests, all except one, but the admin now has no email notification of that one error, because SMTP was down when the app tried to send it; the error was simply written to the application log, and an admin who doesn't check that log for days (even weeks) at a time has no way to know the error took place.
So is there a win/win way to solve this problem, or am I always going to be at a loss and at the mercy of SMTP being up? Remember, it is out of our scope to manage email server redundancy.
Extend Scenario 2 to keep a record of which entries in the application log didn't get sent via email, then periodically poll for unsent entries and try to resend them; eventually the SMTP service will be available again. (You might want to stop any resent errors from going back into the resend queue, though.) A sketch of this approach is below.
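A minimal sketch of that retry loop in Java; the class, the sendEmail and log methods are hypothetical placeholders, and a real implementation would persist the unsent queue rather than keep it in memory:
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ErrorNotifier {
    private final Queue<String> unsent = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public ErrorNotifier() {
        // Periodically retry anything that failed to send earlier.
        scheduler.scheduleWithFixedDelay(this::flushUnsent, 1, 5, TimeUnit.MINUTES);
    }

    // Called from the catch/log blocks in the request loop.
    public void notifyAdmin(String errorReport) {
        try {
            sendEmail(errorReport);              // your real SMTP call goes here
        } catch (Exception smtpDown) {
            log(errorReport);                    // still goes to the application log
            unsent.add(errorReport);             // remember it for a later retry
        }
    }

    private void flushUnsent() {
        String report;
        while ((report = unsent.peek()) != null) {
            try {
                sendEmail(report);
                unsent.poll();                   // only drop it once it really went out
            } catch (Exception smtpStillDown) {
                break;                           // SMTP still down; try again next cycle
            }
        }
    }

    private void sendEmail(String body) throws Exception { /* SMTP send (hypothetical) */ }
    private void log(String body) { /* application log write (hypothetical) */ }
}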
I would suggest the "win-win" way would be to have a server admin who actually administrates the server, rather than one who is entirely unreachable when his mail server is down, and doesn't bother to check up on it afterwards to see if he missed any notifications.