MSMQ in cluster

I configured MSMQ to run in a cluster. The cluster consists of two Hyper-V virtual machines and uses shared storage on a third virtual machine (all virtual machines are in the same Windows domain and can see each other over the network). The Failover Cluster Manager snap-in shows that the clustered MSMQ service is running, and the non-clustered MSMQ services on the cluster member machines are shown as running in the Services snap-in. Now I try to send a message from a remote computer (the third virtual machine) to the clustered MSMQ service and to the non-clustered MSMQ services. I use the following queue names:
FormatName:Direct=OS:{clustered-msmq-netbios-name}\private$\{queueName}
FormatName:Direct=TCP:{clustered-msmq-ip-address}\private$\{queueName}
FormatName:Direct=TCP:{non-clustered-msmq-ip-address}\private$\{queueName}
When the non-clustered MSMQ IP address is specified, the message is delivered to the non-clustered MSMQ instance. But when I try to reach the clustered MSMQ instance, the sent message stays in the outgoing queue with the status "Waiting to connect" (Failed to connect Winsock socket), and the queue on the clustered MSMQ instance stays empty.
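For reference, the sending side does roughly the following with System.Messaging (a minimal sketch; the queue path below is a placeholder built from the first format name above):

using System.Messaging;

class Sender
{
    static void Main()
    {
        // Placeholder: direct format name pointing at the clustered MSMQ network name.
        var path = @"FormatName:DIRECT=OS:clustered-msmq-netbios-name\private$\testqueue";

        using (var queue = new MessageQueue(path))
        using (var message = new Message("ping"))
        {
            message.Recoverable = true; // store on disk so the message survives service restarts
            queue.Send(message);
        }
    }
}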
I also tried to connect to the clustered MSMQ service with telnet, using the clustered MSMQ IP address and port 1801. It says "Could not open connection to the host, on port 1801: Connect failed".
Any idea?
Additional information: when I click the "Manage Message Queueing" menu item while both cluster servers are online, there is no Message Queueing item in the snap-in tree. When I pause one server (the second one), a Message Queueing item appears in the tree. And while the Message Queueing item is present, the messages start to be processed (I see them disappear from the outgoing queue on the sending server, but I don't see them appear on the receiving server).

It seems that you can manage clustered Message Queueing only from the cluster node that currently owns the role. On the node that is not currently active, there is no "Manage Message Queueing" menu item.
As for the issue of messages not being delivered to the clustered MSMQ instance, I simply reinstalled the MSMQ Windows feature on one of the cluster nodes and recreated the MSMQ cluster role. After that, message delivery just started working.

Related

Client Local Queue in Red Hat AMQ

We have a network of Red Hat AMQ 7.2 brokers in a master/slave configuration. The client applications publish and subscribe to topics on the broker cluster.
How do we handle the situation where the network connectivity between the client application and the broker cluster goes down? Does Red Hat AMQ have a native solution, such as a client-side local queue and a JMS-to-JMS bridge between the local queue and the remote broker, so that a network connectivity failure will not result in loss of messages?
It would be possible for you to craft a solution where your clients use a local broker and that local broker bridges messages to the remote broker. The local broker will, of course, never lose network connectivity with the local clients since everything is local. However, if the local broker loses connectivity with the remote broker it will act as a buffer and store messages until connectivity with the remote broker is restored. Once connectivity is restored then the local broker will forward the stored messages to the remote broker. This will allow the producers to keep working as if nothing has actually failed. However, you would need to configure all this manually.
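One way to sketch that local-broker bridge in Red Hat AMQ 7 (ActiveMQ Artemis) terms is a core bridge in the local broker's broker.xml; the connector, queue and address names below are made up for illustration, and the exact attribute set depends on your version:

<core xmlns="urn:activemq:core">
  <connectors>
    <!-- connector pointing at the remote (central) broker -->
    <connector name="remote-broker">tcp://remote-host:61616</connector>
  </connectors>
  <bridges>
    <bridge name="orders-bridge">
      <!-- local store-and-forward queue the local clients actually send to -->
      <queue-name>local.orders</queue-name>
      <!-- address on the remote broker the messages are forwarded to -->
      <forwarding-address>orders</forwarding-address>
      <retry-interval>2000</retry-interval>
      <reconnect-attempts>-1</reconnect-attempts>
      <static-connectors>
        <connector-ref>remote-broker</connector-ref>
      </static-connectors>
    </bridge>
  </bridges>
</core>

While the remote broker is unreachable, messages simply accumulate in local.orders; once the bridge reconnects, it drains them to the forwarding address.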
That said, even if you don't implement such a solution there is absolutely no need for any message loss even when clients encounter a loss of network connectivity. If you send durable (i.e. persistent) messages then by default the client will wait for a response from the broker telling the client that the broker successfully received and persisted the message to disk. More complex interactions might require local JMS transactions and even more complex interactions may require XA transactions. In any event, there are ways to eliminate the possibility of message loss without implementing some kind of local broker solution.

How to see which Kafka client application connected from a certain IP address?

Although Kafka clients are authenticated and can be restricted (authorized) to connect only from allowed IP addresses, multiple applications may be deployed on those app servers. It would be helpful if a Kafka admin could somehow match a given connection (visible only via netstat on the Kafka server machine!) to a specific application, either by an application tag passed explicitly by the Kafka client, or by the command name that started the client application (visible via the ps command on Unix), passed by the local operating system through the Kafka client to the broker. Is there already such a possibility?
That would imply that connections are held somewhere within Kafka as browsable objects, either in some internal topic or in its ZooKeeper.
Alternatively, at least displaying the authorized principal that initiated the connection would also do. This question applies to both consumers and producers.
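For what it's worth, the "application tag" part of this idea is close to what the standard client.id setting already provides: the client sends it with every request, and it shows up in broker-side request logging, quotas and per-client metrics. A minimal sketch using the Confluent.Kafka .NET client for illustration (broker address, topic and application name are placeholders):

using System;
using Confluent.Kafka;

class Producer
{
    static void Main()
    {
        var config = new ProducerConfig
        {
            BootstrapServers = "kafka-broker:9092",   // placeholder broker address
            ClientId = "billing-service"              // the per-application "tag" the broker sees
        };

        using (var producer = new ProducerBuilder<Null, string>(config).Build())
        {
            producer.Produce("some-topic", new Message<Null, string> { Value = "hello" });
            producer.Flush(TimeSpan.FromSeconds(5));
        }
    }
}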

NServiceBus and remote input queues with an MSMQ cluster

We would like to use the Publish / Subscribe abilities of NServiceBus with an MSMQ cluster. Let me explain in detail:
We have an SQL Server cluster that also hosts the MSMQ cluster. Besides SQL Server and MSMQ we cannot host any other application on this cluster. This means our subscriber is not allowed to run on the cluster.
We have multiple application servers hosting different types of applications (ranging from ASP.NET MVC to SharePoint Server 2010). The goal is to do pub/sub between all these applications.
All messages going through the pub/sub are critical and have an important value to the business. That's why we don't want local queues on the application server, but we want to use MSMQ on the cluster (in case we lose one of the application servers, we don't risk losing the messages since they are safe on the cluster).
Now I was assuming I could simply do the following at the subscriber side:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="MsmqTransportConfig" type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core" />
    ....
  </configSections>
  <MsmqTransportConfig InputQueue="myqueue#server" ErrorQueue="myerrorqueue"
                       NumberOfWorkerThreads="1" MaxRetries="5" />
  ....
</configuration>
I'm assuming this used to be supported, given the documentation: http://docs.particular.net/nservicebus/messaging/publish-subscribe/
But this actually throws an exception:
Exception when starting endpoint, error has been logged. Reason:
'InputQueue' entry in 'MsmqTransportConfig' section is obsolete. By
default the queue name is taken from the class namespace where the
configuration is declared. To override it, use .DefineEndpointName()
with either a string parameter as queue name or Func<string> parameter
that returns queue name. In this instance, 'myqueue#server' is defined
as queue name.
Now, the exception clearly states I should use the DefineEndpointName method:
Configure.With()
.DefaultBuilder()
.DefineEndpointName("myqueue#server")
But this throws another exception, which is documented (input queues should be on the same machine):
Exception when starting endpoint, error has been logged. Reason: Input
queue must be on the same machine as this process.
How can I make sure that my messages are safe if I can't use MSMQ on my cluster?
Dispatcher!
Now I've also been looking into the dispatcher for a bit, and it doesn't seem to solve my issue either. I'm assuming the dispatcher also wouldn't be able to get messages from a remote input queue? And besides that, if the dispatcher dispatches messages to the workers and the workers go down, are my messages lost (even though they were not processed)?
Questions?
To summarize, these are the things I'm wondering with my scenario in NServiceBus:
I want my messages to be safe on the MSMQ cluster and use a remote input queue. Is this something I should or shouldn't do? Is it possible with NServiceBus?
Should I use a dispatcher in this case? Can it read from a remote input queue? (I cannot run the dispatcher on the cluster)
What if the dispatcher dispatches messages to the workers and one of the workers goes down? Do I lose the message(s) that were being processed?
Phill's comment is correct.
The thing is that you would get the type of fault tolerance you require practically by default if you set up a virtualized environment. In that case, the C drive backing the local queue of your processes is actually sitting on the VM image on your SAN.
You will need a MessageEndpointMappings section that you will use to point to the Publisher's input queue. This queue is used by your Subscriber to drop off subscription messages. This will need to be QueueName#ClusterServerName. Be sure to use the cluster name and not a node name. The Subscriber's input queue will be used to receive messages from the Publisher and that will be local, so you don't need the #servername.
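A rough sketch of that mapping in the Subscriber's app.config (section registration as in older MsmqTransportConfig-era NServiceBus; the message assembly, queue and cluster names are placeholders, and the queue#machine form should match what your NServiceBus version expects):

<configSections>
  <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
</configSections>
<UnicastBusConfig>
  <MessageEndpointMappings>
    <!-- subscription messages for this assembly go to the Publisher's input queue on the cluster -->
    <add Messages="MyCompany.Messages" Endpoint="publisherinputqueue#ClusterServerName" />
  </MessageEndpointMappings>
</UnicastBusConfig>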
There are two levels of failure: one is that the transport is down (say MSMQ), and the other is that the endpoint is down (the Windows service). In the event that the endpoint is down, the transport will handle persisting the messages to disk. A redundant network storage device may be in order.
In the event that the transport is down, assuming it is MSMQ, the messages will back up on the Publisher side of things. Therefore you have to account for the size and number of messages to calculate how long you want messages to back up for. Since the Publisher is clustered, you can be assured that the messages will arrive eventually, assuming you planned your disk appropriately.
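For example (figures are illustrative, plug in your own rates and sizes): if the Publisher produces around 50 messages per second at roughly 10 KB each, buffering a 4-hour transport outage needs about 50 × 10 KB × 3600 × 4 ≈ 7 GB of queue storage on the Publisher side.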

JBOSS messaging replicated queue

I am using JBOSS messaging in the following way:
1) Two JBOSS instances using 'all' config (i.e. clustered config)
2) One replicated queue created on each JBOSS instance with the same JNDI name (clustered = true; see the sketch after this list)
3) One producer attached locally to the queue on each instance (i.e. the producers on both nodes keep adding messages to this replicated queue)
4) One JBOSS instance is marked as the "consumer node" and the queue's message consumer is started only on this node (i.e. messages will be consumed on only one node). There is logic that decides which JBOSS instance is marked as the "consumer node"
5) The PostOffice used is clustered
6) The server peer is configured not to enforce message sequencing
7) Produced messages are non-persistent (deliveryMode = NON_PERSISTENT)
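For context, the clustered queue from point 2 would typically be deployed via a *-service.xml descriptor roughly like this (names are illustrative, and the attribute set may differ between JBoss Messaging versions):

<mbean code="org.jboss.jms.server.destination.QueueService"
       name="jboss.messaging.destination:service=Queue,name=MyClusteredQueue"
       xmbean-dd="xmdesc/Queue-xmbean.xml">
  <depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
  <depends>jboss.messaging:service=PostOffice</depends>
  <!-- mark the destination as clustered so the clustered PostOffice manages it -->
  <attribute name="Clustered">true</attribute>
</mbean>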
But I am facing a problem with this: messages produced on the "non consumer node" do not get replicated to the queue on the "consumer node" and hence are not available for consumption.
I enabled logging and saw that the PostOffice finds two queues but delivers only to the local queue, as it discovers that the remote queue is recoverable.
Any idea how to get it working?
FYI: I believe a message can be delivered to only one queue (local or remote). So I want a single distributed queue, but I am currently getting two different queues (although their JNDI name is the same). Is this a problem? If yes, how do I solve it? WebLogic provides the option of creating a queue on the admin server, so a shared queue is possible there. What is the similar mechanism in JBoss Messaging? Or should I approach this problem as two queues that are synchronized? If so, how do I achieve synchronization between them?
Thanks for taking out some time to help me!!
Regards

MSMQ Cluster losing messages on failover

I've got a MSMQ Cluster setup with nodes (active/passive) that share a drive.
Here are the tests I'm performing. I send messages to the queue that are recoverable. I then take the MSMQ cluster group offline and then bring it online again.
Result: The messages are still there.
I then simulate failover by moving the group to node 2. It moves over successfully, but the messages aren't there.
I'm sending the messages as recoverable and the MSMQ cluster group has a drive that both nodes can access.
Anyone?
More Info:
The Quorum drive stays only on node 1.
I have two service/application groups: one for MSMQ and one that is a generic service group.
Even more info:
When node 1 is active, I pump it full of messages. I fail over to node 2: zero messages in the queue on node 2. Then I fail back to node 1, and the messages are there on node 1.
You haven't clustered MSMQ or aren't using clustered MSMQ properly.
What you are looking at are the local MSMQ services.
http://blogs.msdn.com/b/johnbreakwell/archive/2008/02/18/clustering-msmq-applications-rule-1.aspx
Cheers
John
==================================
OK, maybe the drive letter being used isn't consistently implemented.
What is the storage location being used by clustered MSMQ?
If you open this storage location up in Explorer from Node 1 AND Node 2 at the same time, are the folder contents exactly the same? If you create a text file via Node 1's Explorer window, does it appear after a refresh in Node 2's Explorer window?