MSMQ disaster recovery

I am trying to use a synchronization tool (Double-Take) to synchronize the MSMQ storage folder "C:\Windows\System32\msmq\storage"
from one server to another.
The problem is that once the files are moved to the second server, the Message Queuing service cannot be started.
I found that if I exclude the *.MQ files the synchronization works fine, but in that case I lose the transactional messages.
Does anybody have a solution that keeps the transactional messages?
Thank you

MSMQ uses multiple files in the storage directory for transactional messages. Any attempt to copy the storage directory while MSMQ is working on transactional messages is likely to result in files that are out of sync with each other. The only guaranteed way to do this is to stop the MSMQ service first; this is how MQBKUP.EXE works, for example.
Cheers
John
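As a rough sketch, that cold-copy approach can be scripted: stop the service, copy the storage folder, restart. Here robocopy stands in for Double-Take purely for illustration, and the target share name is an assumption:

```shell
REM Cold copy of the MSMQ storage folder (run elevated on the source server).
REM \\TARGETSERVER\msmq-storage is a hypothetical destination share.
net stop MSMQ

REM /MIR mirrors the directory tree; /COPYALL preserves attributes and ACLs.
robocopy "C:\Windows\System32\msmq\storage" "\\TARGETSERVER\msmq-storage" /MIR /COPYALL

net start MSMQ
```

Because the service is stopped for the duration of the copy, the *.MQ transactional files and the rest of the storage directory stay consistent with each other, at the cost of an outage window.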

Related

MSMQ - are messages deleted from the mq files once read?

Is it possible to read a queue message from a persistent mq file (e.g. p000001.mq) that has been processed and deleted, or is the message removed straight away?
The .mq files haven't shrunk after messages were deleted, but I don't appear to be able to open them in QueueExplorer.
"Is it possible to read a queue message from a persistent mq file that has been processed and deleted."
No. If you open the file in Notepad, you should be able to see that the message data is still there, but a flag will have been set so that MSMQ knows to treat the message as invisible.
MQ files do not shrink immediately as that impacts disk I/O performance.
MSMQ performs file cleanup at two points:
Service startup
After the MessageCleanupInterval (default 6 hours).
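If you need cleanup to run more often than the default, the interval can reportedly be tuned via a registry value under the MSMQ parameters key. Treat the value name, location, and units below as an assumption to verify against the MSMQ documentation before relying on it:

```shell
REM Inspect the cleanup interval (a REG_DWORD in milliseconds, if present).
reg query "HKLM\SOFTWARE\Microsoft\MSMQ\Parameters" /v MessageCleanupInterval

REM Hypothetical example: set it to 1 hour (3600000 ms), then restart
REM the MSMQ service so the new interval takes effect.
reg add "HKLM\SOFTWARE\Microsoft\MSMQ\Parameters" /v MessageCleanupInterval /t REG_DWORD /d 3600000
```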

Ingesting a log file into HDFS using Flume while it is being written

What is the best way to ingest a log file into HDFS while it is still being written? I am trying to configure Apache Flume with sources that can also offer data reliability. I tried "exec" and later looked at "spooldir", but the following documentation at flume.apache.org has cast doubt on my approach:
Exec Source:
One of the most commonly requested features is the use case like-
"tail -F file_name" where an application writes to a log file on disk and
Flume tails the file, sending each line as an event. While this is
possible, there’s an obvious problem; what happens if the channel
fills up and Flume can’t send an event? Flume has no way of indicating
to the application writing the log file, that it needs to retain the
log or that the event hasn’t been sent for some reason. Your
application can never guarantee data has been received when using a
unidirectional asynchronous interface such as ExecSource!
Spooling Directory Source:
Unlike the Exec source, "spooldir" source is reliable and will not
miss data, even if Flume is restarted or killed. In exchange for this
reliability, only immutable files must be dropped into the spooling
directory. If a file is written to after being placed into the
spooling directory, Flume will print an error to its log file and stop
processing.
Is anything better available that ensures Flume will not miss any events and also reads in real time?
I would recommend using the Spooling Directory Source because of its reliability. A workaround for the immutability requirement is to compose the files in a second directory and, once they reach a certain size (in bytes or number of log lines), move them to the spooling directory.
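That rotation step can be sketched in a few lines of Python. The file names, threshold, and directory layout here are illustrative, not Flume requirements; the only hard rule is that a file must never be written to after it lands in the spooling directory:

```python
import os
import shutil
import time


def rotate_into_spooldir(active_path, spool_dir, max_bytes):
    """Move the active log into the Flume spooling directory once it
    reaches max_bytes, so only closed (immutable) files are spooled.

    Returns the rotated file's path, or None if no rotation happened.
    """
    if not os.path.exists(active_path):
        return None
    if os.path.getsize(active_path) < max_bytes:
        return None  # not big enough yet; keep appending to it
    # Timestamp-based name so a rotated file is unique and never reopened.
    rotated = os.path.join(spool_dir, "app-%d.log" % int(time.time() * 1000))
    shutil.move(active_path, rotated)
    return rotated
```

The writing application simply reopens `active_path` on its next write. Keep the spooling directory on the same filesystem as the log so the move is an atomic rename rather than a copy, otherwise Flume could pick the file up mid-copy.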

How does hornetq persist messages?

We are in the process of planning our server machine switch. While we are doing the switch, we need to be able to continue to receive traffic and save the JMS messages that are generated.
Is it possible to move the persisted message queue from one JBoss 7.1.1/HornetQ to another?
HornetQ uses a set of binary journal files to store the messages in the queues.
You can use the export journal / export data tools, or you can use bridges to transfer the data.
You should find the relevant information in the documentation at hornetq.org.
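If you go the file-copy route rather than bridges, a minimal sketch follows, assuming a default JBoss 7.1 standalone layout. The directory names come from your messaging subsystem configuration (journal, bindings and large-messages directories), so verify them before copying:

```shell
# 1. Stop the source server so the binary journal files are consistent.
$JBOSS_HOME/bin/jboss-cli.sh --connect command=:shutdown

# 2. Copy the journal, bindings and large-messages directories to the
#    new server's data directory (paths here are assumptions).
rsync -a "$JBOSS_HOME/standalone/data/messagingjournal/"       newhost:/opt/jboss/standalone/data/messagingjournal/
rsync -a "$JBOSS_HOME/standalone/data/messagingbindings/"      newhost:/opt/jboss/standalone/data/messagingbindings/
rsync -a "$JBOSS_HOME/standalone/data/messaginglargemessages/" newhost:/opt/jboss/standalone/data/messaginglargemessages/

# 3. Start the new server; HornetQ replays the journal on startup.
```

Copying the journal while the server is running is not safe, for the same reason as the MSMQ case above: the journal files must be consistent with each other.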

NServiceBus and remote input queues with an MSMQ cluster

We would like to use the Publish / Subscribe abilities of NServiceBus with an MSMQ cluster. Let me explain in detail:
We have an SQL Server cluster that also hosts the MSMQ cluster. Besides SQL Server and MSMQ we cannot host any other applications on this cluster, which means our subscriber is not allowed to run on the cluster.
We have multiple application servers hosting different types of applications (going from ASP.NET MVC to SharePoint Server 2010). The goal is to do a pub/sub between all these applications.
All messages going through the pub/sub are critical and have an important value to the business. That's why we don't want local queues on the application server, but we want to use MSMQ on the cluster (in case we lose one of the application servers, we don't risk losing the messages since they are safe on the cluster).
Now I was assuming I could simply do the following at the subscriber side:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="MsmqTransportConfig" type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core" />
....
</configSections>
<MsmqTransportConfig InputQueue="myqueue#server" ErrorQueue="myerrorqueue"
NumberOfWorkerThreads="1" MaxRetries="5" />
....
</configuration>
I assumed this used to be supported, given the documentation: http://docs.particular.net/nservicebus/messaging/publish-subscribe/
But this actually throws an exception:
Exception when starting endpoint, error has been logged. Reason:
'InputQueue' entry in 'MsmqTransportConfig' section is obsolete. By
default the queue name is taken from the class namespace where the
configuration is declared. To override it, use .DefineEndpointName()
with either a string parameter as queue name or Func parameter
that returns queue name. In this instance, 'myqueue#server' is defined
as queue name.
Now, the exception clearly states I should use the DefineEndpointName method:
Configure.With()
.DefaultBuilder()
.DefineEndpointName("myqueue#server")
But this throws another exception, which is documented (input queues should be on the same machine):
Exception when starting endpoint, error has been logged. Reason: Input
queue must be on the same machine as this process.
How can I make sure that my messages are safe if I can't use MSMQ on my cluster?
Dispatcher!
Now I've also been looking into the dispatcher for a bit, and it doesn't seem to solve my issue either. I'm assuming the dispatcher also wouldn't be able to get messages from a remote input queue? And besides that, if the dispatcher dispatches messages to the workers and the workers go down, are my messages lost (even though they were not processed)?
Questions?
To summarize, these are the things I'm wondering with my scenario in NServiceBus:
I want my messages to be safe on the MSMQ cluster and use a remote input queue. Is this something I should or shouldn't do? Is it possible with NServiceBus?
Should I use a dispatcher in this case? Can it read from a remote input queue? (I cannot run the dispatcher on the cluster)
What if the dispatcher dispatches messages to the workers and one of the workers goes down? Do I lose the message(s) that were being processed?
Phill's comment is correct.
The thing is that you would get the type of fault tolerance you require practically by default if you set up a virtualized environment. In that case, the C drive backing the local queue of your processes is actually sitting on the VM image on your SAN.
You will need a MessageEndpointMappings section that you will use to point to the Publisher's input queue. This queue is used by your Subscriber to drop off subscription messages. This will need to be QueueName#ClusterServerName. Be sure to use the cluster name and not a node name. The Subscriber's input queue will be used to receive messages from the Publisher and that will be local, so you don't need the #servername.
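A minimal sketch of what that mapping might look like in the Subscriber's app.config, following the `#` notation used in this question; the assembly name, queue name and cluster name are all placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
  </configSections>
  <UnicastBusConfig>
    <MessageEndpointMappings>
      <!-- Use the cluster name, not a node name -->
      <add Messages="MyCompany.Messages" Endpoint="PublisherInputQueue#ClusterName" />
    </MessageEndpointMappings>
  </UnicastBusConfig>
</configuration>
```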
There are two levels of failure: one is that the transport is down (say MSMQ) and the other is that the endpoint is down (the Windows service). In the event that the endpoint is down, the transport will handle persisting the messages to disk. A redundant network storage device may be in order.
In the event that the transport is down, assuming it is MSMQ, the messages will back up on the Publisher side of things. Therefore you have to account for the size and number of messages to calculate how long you want messages to back up for. Since the Publisher is clustered, you can be assured that the messages will arrive eventually, assuming you planned your disk space appropriately.

MSMQ Cluster losing messages on failover

I've got a MSMQ Cluster setup with nodes (active/passive) that share a drive.
Here are the tests I'm performing. I send messages to the queue that are recoverable. I then take the MSMQ cluster group offline and then bring it online again.
Result: The messages are still there.
I then simulate failover by moving the group to node 2. Moves over successfully, but the messages aren't there.
I'm sending the messages as recoverable and the MSMQ cluster group has a drive that both nodes can access.
Anyone?
More Info:
The Quorum drive stays only on node 1.
I have two service/app groups. One MSMQ and one that is a generic service group.
Even more info:
When node 1 is active, I pump it full of messages. I fail over to node 2: 0 messages in the queue on node 2. Then I fail back to node 1, and the messages are on node 1.
You haven't clustered MSMQ or aren't using clustered MSMQ properly.
What you are looking at are the local MSMQ services.
http://blogs.msdn.com/b/johnbreakwell/archive/2008/02/18/clustering-msmq-applications-rule-1.aspx
Cheers
John
==================================
OK, maybe the drive letter being used isn't consistently implemented.
What is the storage location being used by clustered MSMQ?
If you open this storage location up in Explorer from Node 1 AND Node 2 at the same time, are the folder contents exactly the same? If you create a text file via Node 1's Explorer window, does it appear after a refresh in Node 2's Explorer window?