MSMQ messages disappear from outbound queue but never arrive in the inbound queue

I have a strange issue setting up an existing application on our new internal Cloud.
I have a simple messaging system that pushes a message from one server (Server1) onto an MSMQ queue on another server (Server2). The messages disappear from the outbound queue but never appear in the inbound queue.
When I take the MSMQ service on Server2 offline, the messages build up on Server1. Restarting MSMQ on Server2 causes the messages in the outbound queue on Server1 to disappear, but they still never arrive at Server2.
The details:
MSMQ is set up in workgroup mode, as that is a requirement of the virtual network.
Queues are private.
Permissions are set to allow certain users access.
Does anybody have any ideas why this is happening, or how I could track down the issue?

It could be that the remote private queue is a transactional queue and you are sending the message as non-transactional, or vice versa. If the transaction setting on the queue and on the message do not match, the message will disappear!

I have seen this in the past with the direct format name, where it was set to something like
DIRECT=OS:192.168.0.1\PRIVATE$\MyQueue
when I should have specified
DIRECT=TCP:192.168.0.1\PRIVATE$\MyQueue
see:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms700996(v=vs.85).aspx
John Breakwell noted here (http://blogs.msdn.com/b/johnbreakwell/archive/2010/01/22/why-does-msmq-keep-losing-my-messages.aspx):
Server name used to address message doesn't match destination machine
When MSMQ receives a message from over the wire, it always validates that this machine is the correct recipient. This is to ensure that something like a DNS misconfiguration does not result in messages being delivered to the wrong place. The messages are, instead, discarded unless the IgnoreOSNameValidation registry value is set appropriately. You may want to do this with an Internet-facing MSMQ server, for example, where the domain and server names visible to MSMQ clients on the Internet often bear no resemblance to the real ones (for good security reasons).

It sounds like a permissions or addressing issue.
Try enabling the End2End event log under Applications and Services Logs -> Microsoft -> Windows -> MSMQ.
This log should tell you exactly what is going wrong with the delivery of messages to the expected destination queue.
Note: for every successful delivery there should be three events raised in this log:
Message with ID blah came over the network (i.e. the message arrived from a remote sender)
Message with ID blah was sent to queue blah (i.e. the message was forwarded to the local queue)
Message with ID blah was put into queue blah (i.e. the message arrived in the local queue)
This assumes you are using Server 2008 or above.

You can add Negative Source Journaling to the sending application's code to find out exactly what the root cause is; most likely it is one of the causes covered by the two answers you have already received.

Are the messages arriving in the dead-letter queue on Server 2?

Related

How to check if Siebel has successfully delivered an email?

We send a lot of email messages from our Siebel 7.8 application, and we'd like to determine whether they have been successfully delivered or not.
According to the Bookshelf, if the SMTP server is down, the Communications Outbound Manager retries to send the message later, so that's not a problem. However, there are still plenty of issues which could cause an email to not be delivered, such as a typo in the address, the receiver having reached its storage quota, etc.
We send our messages this way:
// Build the input property set for the Outbound Communications Manager.
var ps = TheApplication().NewPropertySet();
ps.SetProperty("ActivityId", outboundEmailActivityId);
ps.SetProperty("CommProfile", commProfile);
ps.SetProperty("ProcessMode", "Local");
// Output property set returned by the service.
var psOut = TheApplication().NewPropertySet();
var bs = TheApplication().GetService("Outbound Communications Manager");
bs.InvokeMethod("SendMessage", ps, psOut);
Using ProcessMode = Local allows us to detect a few errors. For example, if we try to send a message to a non-existent account in the same domain as our SMTP server, it returns 550 Unknown user and then 503 Must have sender and recipient first. The Outbound Communications Manager raises an exception, which we capture and handle.
However, if we send a message to a non-existent account in a different domain, our SMTP server can't know that delivery will fail, so it returns 250 Queued and our code completes successfully. Later (anywhere from seconds to a few hours afterwards), we receive a "Message undeliverable" error message, but at that point we only know that an outbound message failed; we don't know which one.
Is there any way in which Siebel can handle these 'Message undeliverable' notifications automatically?
We are thinking of writing our own process for that, but it seems like a huge task: we'd have to parse the delivery failure notification, identify the failing recipient, search for all the recent messages sent to that address, and somehow, guess which one failed (based on the Message-Id if we are lucky and can read it within Siebel, or on the Subject otherwise).
The problem is that SMTP is by nature neither a synchronous nor a reliable protocol (in the sense of "engineered for guaranteed delivery"). Your Siebel app server connects to its assigned SMTP server and asks it to accept a message for delivery, and at that point a few high-level validations can be performed (some of which you've mentioned, but which can also include policy enforcement such as checking whether your, possibly anonymous, identity is authorized to relay messages to external domains). Once that conversation ends there is not much else you can reliably do, because everything from that point on is asynchronous and not guaranteed for delivery: any number of intermediate relay agents can be involved, each with its own potential for outages with or without retry, and each with the ability to honor or ignore requests for delivery or read receipts, to report invalid recipients or not, to throw your message in a junk folder, and so on. You can certainly attempt to work with any bounce notifications you do happen to get and try to correlate them back to the sender, but that would be outside the context of your sending code.

XMPP messages are lost when the client connection is lost suddenly

I am using an ejabberd server and the iOS XMPPFramework.
There are two clients, A and B.
When A and B are online, A can send a message to B successfully.
If B is offline, B receives the message when B comes online again.
But when B loses its connection suddenly/unexpectedly, for example when Wi-Fi is switched off manually, the message sent by A is lost. B will never receive it.
I guess the reason is that B lost the connection suddenly and the server still thinks B is online, so offline message storage does not kick in under this condition.
So my question is: how can I ensure that a message sent by A will be received by B, so that no messages are lost?
I've spent the last week trying to track down missing messages in my XMPPFramework and eJabberd messaging app. Here are the full steps I went through to guarantee message delivery and what the effects of each step are.
Mod_offline
In the ejabberd.yml config file ensure that you have this in the access rules:
max_user_offline_messages:
  admin: 5000
  all: 100
and this in the modules section:
mod_offline:
  access_max_user_messages: max_user_offline_messages
When the server knows the recipient of a message is offline, it will store the message and deliver it when the recipient reconnects.
Ping (XEP-0199)
// Respond to ping queries from the server and other clients.
xmppPing = XMPPPing()
xmppPing.respondsToQueries = true
xmppPing.activate(xmppStream)
// Ping the server every 2 minutes; treat a missing reply within 10 seconds as a timeout.
xmppAutoPing = XMPPAutoPing()
xmppAutoPing.pingInterval = 2 * 60
xmppAutoPing.pingTimeout = 10.0
xmppAutoPing.activate(xmppStream)
Ping acts like a heartbeat, so the server knows when the user is offline but didn't disconnect normally. It's a good idea not to rely on this alone and to disconnect in applicationDidEnterBackground, but when the client loses connectivity or the stream disconnects for unknown reasons there is a window of time where the client is offline but the server doesn't know it yet, because the next ping isn't expected until some time in the future. In this scenario the message isn't delivered and isn't stored for offline delivery.
Stream Management (XEP-0198)
xmppStreamManagement = XMPPStreamManagement(storage: XMPPStreamManagementMemoryStorage(), dispatchQueue: dispatch_get_main_queue())
xmppStreamManagement.autoResume = true
xmppStreamManagement.addDelegate(self, delegateQueue: dispatch_get_main_queue())
xmppStreamManagement.activate(xmppStream)
and then in xmppStreamDidAuthenticate
xmppStreamManagement.enableStreamManagementWithResumption(true, maxTimeout: 100)
Nearly there. The final step is to go back to the ejabberd.yml and add this line to the listening ports section underneath access: c2s:
resend_on_timeout: true
Stream Management adds req/ack handshakes after each message delivery. On its own it won't have any effect on the server side unless resend_on_timeout is set (which it isn't by default on ejabberd).
There is a final edge case which needs to be considered: when the acknowledgement of a received message doesn't make it back to the server, the server decides to hold the message for offline delivery, and the next time the client logs in it is likely to get a duplicate. To handle this, set the delegate for the XMPPStreamManagement, implement xmppStreamManagement getIsHandled:, and if the message has a chat body set the isHandledPtr to false. When you construct an outbound message, add an xmppElement with a unique id:
let xmppMessage = XMPPMessage(type: "chat", to: partnerJID)
// Child element carrying a unique id so the receiver can detect duplicates.
let xmppElement = DDXMLElement(name: "message")
xmppElement.addAttributeWithName("id", stringValue: xmppStream.generateUUID())
xmppElement.addAttributeWithName("type", stringValue: "chat")
xmppElement.addAttributeWithName("to", stringValue: partnerJID.bare())
xmppMessage.addBody(message)
xmppMessage.addChild(xmppElement)
// Ask the receiver for a delivery receipt (XEP-0184).
xmppMessage.addReceiptRequest()
xmppStream.sendElement(xmppMessage)
Then when you receive a message, inform the stream manager that the message has been handled with xmppStreamManager.markHandledStanzaId(message.from().resource)
The purpose of this final step is to establish a unique identifier that you can add to the XMPPMessageArchivingCoreDataStorage and check for duplicates before displaying.
I guess the reason is that B lost the connection suddenly and the server still thinks B is online, so offline message storage does not kick in under this condition.
Yes, you are absolutely correct; this is a well-known limitation of TCP connections.
There are two approaches to your problem.
1. Server side
Since you are using ejabberd as the XMPP server, you can enable mod_ping. This module turns on a server-side heartbeat (ping): the server periodically pings each connection, and when a ping goes unanswered it detects that the connection between server and client has been lost. This approach has one drawback: mod_ping has a property called ping_interval which controls how often the heartbeat is sent to connected clients, and its lower limit is 32 seconds (any value below 32 is ignored by ejabberd). That means you have a 32-second black window in which messages can be lost while the user still shows as online. A config sketch is shown after this list.
2. Client side
From the client side you can implement the Message Delivery Receipts mechanism (XEP-0184). With each chat message you request a receipt from the receiving user, and as soon as the receiving user gets the message they send back the receipt id. This way you can detect that your message was actually delivered to the receiver. If you don't receive such an acknowledgement within a certain time interval, you can show the user as offline locally (on the phone), store any further messages to this user as offline messages locally (in an SQLite database), and wait for the offline presence stanza for that user. As soon as you receive the offline presence stanza, the server has finally detected that the connection to that user is lost and has marked the user as offline; now you can send all of the stored messages to that user, and they will be stored as offline messages on the server. This is the best approach to avoid the black window.
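For reference, enabling the server-side heartbeat in the modules section of ejabberd.yml might look something like the sketch below. The option names are the ones documented for mod_ping; the values are only an example and should be tuned for your deployment.
mod_ping:
  send_pings: true
  ping_interval: 32
  timeout_action: kill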
Conclusion
You can use approach 2 and design your client accordingly, and you can also combine approach 1 with approach 2 to minimize the time it takes the server to detect a broken connection.
If B goes offline suddenly, then user A has to check whether B is online or offline when sending a message to user B. If user B is offline, then user A has to upload that message to the server using a web service, and user B has to call the web service in the function below.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
That way user B will get all the offline messages that were lost due to the lost connection.
In the end, I used Ping together with Stream Management (http://xmpp.org/extensions/xep-0198.html) and the problem is solved.

HornetQ: guarantee that the message reached the queue

I am using org.hornetq.api.core.client.
How can I guarantee that the message I am sending actually reached the queue (not the client, just the queue)?
producer.send("validQueue", clientMessage)
Please note that the queue is a valid queue.
This similar question refers to an invalid queue; other ones, such as this one, are about delivery to the client.
It really depends on how you are sending.
Your first question was about sending to an invalid queue: with JMS you have no way to send to an invalid queue, since the producer validates the queue's existence. With the HornetQ core API you send to an address (not to a queue), and you may have an unbound queue, so you have to query whether the address has queues bound to it.
Now, for confirmation that the message was received:
Scenario I, persistent messages, non-transactional
Every message is sent blocking: the client unblocks as soon as the server has acknowledged receiving the message. This is done automatically; you don't have to do anything.
Scenario II, non-persistent messages, non-transactional
There are no confirmations by default; the message is sent asynchronously. We assume the message is transient and it's not a big deal if you lose it. You can change that by setting block-on-non-persistent-send on the ServerLocator.
Scenario III, transactional (either persistent or not)
As soon as you call commit, the message is on the queues.
Scenario IV, confirmation send
You set a callback and you get a method call as soon as the server has acknowledged the message on the queues. Look in the manual for the confirmation callback; the same feature also exists in JMS 2.
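For illustration, here is a minimal sketch using the HornetQ core client that puts the blocking-send settings (scenarios I/II) and the send-acknowledgement callback (scenario IV) together. The Netty connector, the "validQueue" address and the payload are just placeholders for the example; consult the HornetQ manual for the exact settings that match your version and setup.
import org.hornetq.api.core.Message;
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.ClientMessage;
import org.hornetq.api.core.client.ClientProducer;
import org.hornetq.api.core.client.ClientSession;
import org.hornetq.api.core.client.ClientSessionFactory;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.SendAcknowledgementHandler;
import org.hornetq.api.core.client.ServerLocator;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class ConfirmedSendExample {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));

        // Scenario I is the default for durable messages; the second flag makes
        // non-durable sends block as well (scenario II).
        locator.setBlockOnDurableSend(true);
        locator.setBlockOnNonDurableSend(true);

        // Needed for scenario IV: a confirmation window lets the server send
        // acknowledgements back as messages are placed on the queues.
        locator.setConfirmationWindowSize(10 * 1024);

        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();

        // Scenario IV: called asynchronously once the server has the message on the queues.
        session.setSendAcknowledgementHandler(new SendAcknowledgementHandler() {
            public void sendAcknowledged(Message message) {
                System.out.println("Server confirmed message " + message.getMessageID());
            }
        });

        ClientProducer producer = session.createProducer("validQueue"); // address, not queue
        ClientMessage clientMessage = session.createMessage(true); // durable message
        clientMessage.getBodyBuffer().writeString("hello");
        producer.send(clientMessage);

        session.close();
        factory.close();
        locator.close();
    }
}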

MSMQ: incoming traffic, but messages don't show up in the queue

I'm transferring our web application to new infrastructure and I'm stuck at the MSMQ part.
1st screenshot: Server A sends messages to server B. I see the outgoing messages appear on server A.
2nd screenshot: Server B shows incoming traffic, but the messages don't appear in the queue.
The service picking up the messages at server B is not running!
Any ideas how to debug this situation?
The status of the outgoing queue is connected but the messages aren't moving. It is likely that the acknowledgement messages are not being sent back successfully from server B. As server A never sees the acknowledgements, it is stuck in a permanent state of retrying the send while awaiting a response. There should be an outgoing queue on server B pointing back to server A; check its status. It is very likely that the IP address of that outgoing queue is incorrect.
If the messages are queuing in your outgoing queue on server A that means that they are definitely not being sent to the destination queue on server B.
If you have messages arriving on server B but not being delivered then this is probably due to queue permissions. However, based on your assertion that messages queue up on the outbound queue I can't see how server B can be receiving any messages.

Quickfix engine - does it persist messages before the start time on the server side

Say a QuickFIX session is created by the server (acceptor) at 9 AM, but the StartTime is 11 AM. This means the session exists but is not active.
If the server receives an unsolicited message from an exchange that it needs to send on this session, will it persist the message if I have PersistMessages=Y configured, and send it to the client (initiator) when it connects after 11 AM?
No, it will not persist messages received before the start time and will send you a reject message. The message is rejected at the interface itself and isn't handled; you would have to resend it to get a response.
QuickFIX does persist (but not send) messages submitted before a session is connected. The sequence numbers are updated, and when the session connects and the first message is sent, the counterparty FIX engine will see the gap in the sequence numbers and request a resend. QuickFIX will then resend the persisted messages. However, depending on your QuickFIX configuration, the outgoing messages might be considered too old and be rejected locally.
As I understand it, session times are there to reflect the hours during which the corresponding exchange will accept orders.
The application or its sub-modules do not need to track those timings and take action when the FIX session closes; QuickFIX automatically deactivates the session.
Persisting the message, or resending it when the session becomes active, does not look desirable to me.
You can instead maintain some kind of queue in the sending application to buffer such messages, and send them only when the active session window is open; a minimal sketch of that approach follows.
Those are my thoughts, hope that helps.
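If you go the buffering route, a minimal QuickFIX/J-style sketch of that check might look like the following. The BufferedSender class and its methods are hypothetical helpers, not part of QuickFIX; only Session.lookupSession, Session.isLoggedOn and Session.sendToTarget are standard API.
import java.util.ArrayDeque;
import java.util.Queue;

import quickfix.Message;
import quickfix.Session;
import quickfix.SessionID;
import quickfix.SessionNotFound;

public class BufferedSender {
    private final Queue<Message> pending = new ArrayDeque<Message>();

    // Send immediately if the session is active, otherwise buffer locally.
    public synchronized void send(Message message, SessionID sessionID) throws SessionNotFound {
        Session session = Session.lookupSession(sessionID);
        if (session != null && session.isLoggedOn()) {
            Session.sendToTarget(message, sessionID);
        } else {
            pending.add(message); // hold it in the application until the session window opens
        }
    }

    // Call this from Application.onLogon(sessionID) to flush buffered messages.
    public synchronized void flush(SessionID sessionID) throws SessionNotFound {
        Message message;
        while ((message = pending.poll()) != null) {
            Session.sendToTarget(message, sessionID);
        }
    }
}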