JBoss Messaging replicated queue

I am using JBoss Messaging in the following way:
1) Two JBoss instances using the 'all' config (i.e. clustered config)
2) One replicated queue created on each JBoss instance with the same JNDI name (clustered = true); see the descriptor sketch after this list
3) One producer attached locally to the queue on each instance (i.e. the producers on both nodes keep adding messages to this replicated queue)
4) One JBoss instance is marked as the "consumer node" and a queue consumer is started only on this node (i.e. messages are consumed on only one node). Separate logic decides which JBoss instance is marked as the "consumer node"
5) The PostOffice used is clustered
6) The server peer is configured not to enforce message sequencing
7) Produced messages are non-persistent (deliveryMode = NON_PERSISTENT)
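For reference, the queue is deployed on each node with a descriptor roughly like this (a minimal sketch based on the stock JBoss Messaging destinations-service.xml layout; the queue name here is a placeholder):
<mbean code="org.jboss.jms.server.destination.QueueService"
       name="jboss.messaging.destination:service=Queue,name=myReplicatedQueue"
       xmbean-dd="xmdesc/Queue-xmbean.xml">
  <depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
  <depends>jboss.messaging:service=PostOffice</depends>
  <!-- Clustered = true makes the post office treat this as a clustered destination;
       the queue is bound under the same JNDI name (/queue/myReplicatedQueue by default) on both nodes -->
  <attribute name="Clustered">true</attribute>
</mbean>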
But I am facing a problem with this: messages produced on the "non-consumer node" do not get replicated to the queue on the "consumer node" and hence are not available for consumption.
I enabled logging and saw that the post office finds two queues but delivers only to the local queue, because it discovers that the remote queue is recoverable.
Any idea how to get this working?
FYI: I believe a message can be delivered to only one queue (local or remote). So I want a single distributed queue, but I am currently getting two different distributed queues (although their JNDI name is the same). Is this the problem? If yes, how do I solve it? WebLogic provides the option of creating a queue on the admin server, so a shared queue is possible there; what is the similar mechanism in JBoss Messaging? Or should I approach this as two queues that are synchronized? If so, how do I achieve synchronization between them?
Thanks for taking the time to help me!
Regards

Related

Messages are stuck in ActiveMQ Artemis cluster queues

We have a problem with Apache ActiveMQ Artemis cluster queues. Sometimes messages begin to pile up in particular cluster queues. It usually happens 1-4 times per day, mostly on production (it has happened only once in the last 90 days on one of the test environments).
These messages are not delivered to consumers on other cluster brokers until we restart the cluster connector (or the entire broker).
The problem looks related to ARTEMIS-3809.
Our setup is: 6 servers in one environment (3 pairs of master/backup servers). Operating system is Linux (Red Hat).
We have tried to:
upgrade from 2.22.0 to 2.23.1
increase minLargeMessageSize on the cluster connectors to 1024000
The messages still get stuck in the cluster queues.
Another problem: I tried to configure min-large-message-size as written in the documentation (in cluster-connection), but it caused errors at startup (broker.xml did not pass XSD validation), so the only option was to specify minLargeMessageSize in the URL parameters of the connector for each cluster broker. I don't know whether this setting has any effect.
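For reference, the connector definition with the URL parameter looks roughly like this (host, port, and connector name are placeholders; as said above, I am not sure the parameter actually takes effect):
<connectors>
  <!-- connector this broker advertises to the other cluster members -->
  <connector name="cluster-connector">tcp://broker1.example.local:61616?minLargeMessageSize=1024000</connector>
</connectors>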
So we had to write a script that checks whether messages are stuck in the cluster queues and restarts the cluster connector.
How can we debug this situation?
When the messages are stuck, nothing wrong is written to the log (no errors, no stacktraces etc.).
For which classes should we enable debug or trace logging to find out what happens with the cluster connectors?
I believe you can remedy the situation by setting this on your cluster-connection:
<producer-window-size>-1</producer-window-size>
See ARTEMIS-3805 for more details.
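For context, a minimal sketch of where that element sits in the cluster-connection definition in broker.xml (the connector and discovery-group names below are placeholders, not a recommendation, and element order may need adjusting to satisfy the XSD):
<cluster-connections>
  <cluster-connection name="my-cluster">
    <connector-ref>cluster-connector</connector-ref>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>1</max-hops>
    <!-- -1 disables producer flow control on the internal cluster bridge (see ARTEMIS-3805) -->
    <producer-window-size>-1</producer-window-size>
    <discovery-group-ref discovery-group-name="my-discovery-group"/>
  </cluster-connection>
</cluster-connections>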
Generally speaking, moving messages around the cluster via the cluster-connection, while convenient, isn't terribly efficient (much less so for "large" messages). Ideally you would have a sufficient number of clients on each node to consume the messages that were originally produced there. If you don't have that many clients then you may want to re-evaluate the size of your cluster, as it may actually decrease overall message throughput rather than increase it.
If you're just using 3 HA pairs in order to establish a quorum for replication then you should investigate the recently added pluggable quorum voting, which allows integration with a 3rd-party component (e.g. ZooKeeper) for leader election, eliminating the need for a quorum of brokers.

How to obtain number of messages in clustered queue in JBoss EAP6/HornetQ

I am trying to count messages in a clustered HornetQ queue on JBoss EAP 6.4 (domain mode).
Obtaining the number of messages in a particular HornetQ instance is not a problem (here is the way I do it), but what I actually want is the cumulative/total number of messages of a given queue in the whole cluster.
Right now, when I send 24604 messages to a given queue, they are nicely distributed across 3 nodes:
Node A: 8201 messages
Node B: 8202 messages
Node C: 8201 messages
Is there a way to count all messages of a given queue in a cluster?
I finally found a solution to obtain the total number of messages in the cluster: invoke a broadcast EJB call on all cluster members, where each cluster member gets its number of messages from an in-VM JMS sender.
I have described it here:
http://jeefix.com/how-to-invoke-broadcast-ejb-at-all-jboss-eap6-ejb-cluster-members/
http://jeefix.com/managing-hornetq-queues-via-jms-api/

OpenShift message queue

I'd like to host apps on OpenShift that use a queue to communicate with each other.
One kind of app, the producers, will put data on the queue, and another type, the consumers, will process the messages. My question is how to implement the message queue. I've thought about two approaches:
Create an app with JBoss, HornetQ, and the consumer, and create a proxy port for HornetQ so that producers can send messages there.
Create an app with JBoss and the consumer, and make JBoss's HornetQ available to producers. This sounds a bit better to me, but I don't know whether I can make the queue available to producers, or how it works if there are multiple consumer instances on different nodes (and different JBoss instances).
I'm not sure how else to answer you besides showing you a link on how to use Wildfly. You can just use the Wildfly Cartridge:
https://www.openshift.com/quickstarts/wildfly-8
If you provide some extra context I can try to enrich the answer a bit. I need to know what your problem is and what's not working.
If you just want to know how to configure Wildfly with HornetQ, the Wildfly cartridge I posted is the way to go.
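If the sticking point is specifically how to make a HornetQ queue visible to producers running outside that gear, one piece of this is binding the queue into the exported JNDI namespace in the messaging subsystem, roughly like this (queue and entry names are placeholders; whether remote clients can actually reach the port still depends on OpenShift's proxying):
<jms-destinations>
  <jms-queue name="WorkQueue">
    <!-- local binding for the consumer deployed in the same JBoss/Wildfly instance -->
    <entry name="java:/jms/queue/WorkQueue"/>
    <!-- bindings under java:jboss/exported are visible to remote JNDI clients (the producers) -->
    <entry name="java:jboss/exported/jms/queue/WorkQueue"/>
  </jms-queue>
</jms-destinations>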

What is the best practice with JMS servers? Should they be deployed on the consumer or producer side?

From an architecture point of view, I was wondering what the best practice would be for an integration scenario with 2 applications and OSB as middleware: the JMS consumer runs on JBoss, while the OSB application encapsulates a service provider. Should the JMS queues reside on JBoss (foreign server) or on WebLogic Server? That is, if I get to choose, should the JMS server be on the consumer or the producer side? What would be the pros and cons?
Thanks in advance.
It depends on your needs. You can create a foreign destination on your WebLogic server that connects to the producer queue on the producer end. In this arrangement your consumer will listen on the local end of the foreign destination that connects to the producer queue.
I can think of the following benefits:
A> Foreign destinations are mapped to the WebLogic JNDI tree, so any MDB that you deploy to the server can simply reference the remote destination using its local JNDI name (see the descriptor sketch after this list).
B> As you are communicating directly with the remote resource, there is no extra lag/latency in delivery, etc.
C> One issue may be that you will not be able to produce messages from the consuming end, because that user may not have enqueue access to the queue. But it all depends on your setup; this may be required in some cases, such as testing.
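As a rough illustration of point A, the mapping lives in a WebLogic JMS module descriptor along these lines (all names, the URL, and the remote provider's context factory are placeholders; this is a sketch, not a tested configuration, and element order may need adjusting for the schema):
<foreign-server name="RemoteJmsProvider">
  <!-- JNDI properties of the remote (producer-side) provider -->
  <initial-context-factory>org.jnp.interfaces.NamingContextFactory</initial-context-factory>
  <connection-url>jnp://remote-host:1099</connection-url>
  <foreign-connection-factory name="RemoteCF">
    <local-jndi-name>jms/RemoteCF</local-jndi-name>
    <remote-jndi-name>ConnectionFactory</remote-jndi-name>
  </foreign-connection-factory>
  <foreign-destination name="RemoteQueue">
    <!-- MDBs and other local code look the destination up by this local JNDI name -->
    <local-jndi-name>jms/RemoteQueue</local-jndi-name>
    <remote-jndi-name>queue/RemoteQueue</remote-jndi-name>
  </foreign-destination>
</foreign-server>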

NServiceBus and remote input queues with an MSMQ cluster

We would like to use the Publish / Subscribe abilities of NServiceBus with an MSMQ cluster. Let me explain in detail:
We have an SQL Server cluster that also hosts the MSMQ cluster. Besides SQL Server and MSMQ we cannot host any other application on this cluster. This means our subscriber is not allowed to run on the cluster.
We have multiple application servers hosting different types of applications (going from ASP.NET MVC to SharePoint Server 2010). The goal is to do a pub/sub between all these applications.
All messages going through the pub/sub are critical and have an important value to the business. That's why we don't want local queues on the application server, but we want to use MSMQ on the cluster (in case we lose one of the application servers, we don't risk losing the messages since they are safe on the cluster).
Now I was assuming I could simply do the following at the subscriber side:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="MsmqTransportConfig" type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core" />
....
</configSections>
<MsmqTransportConfig InputQueue="myqueue#server" ErrorQueue="myerrorqueue"
NumberOfWorkerThreads="1" MaxRetries="5" />
....
</configuration>
I'm assuming this used to be supported, given the documentation: http://docs.particular.net/nservicebus/messaging/publish-subscribe/
But this actually throws an exception:
Exception when starting endpoint, error has been logged. Reason:
'InputQueue' entry in 'MsmqTransportConfig' section is obsolete. By
default the queue name is taken from the class namespace where the
configuration is declared. To override it, use .DefineEndpointName()
with either a string parameter as queue name or Func<string> parameter
that returns queue name. In this instance, 'myqueue#server' is defined
as queue name.
Now, the exception clearly states I should use the DefineEndpointName method:
Configure.With()
.DefaultBuilder()
.DefineEndpointName("myqueue#server")
But this throws another exception, which is documented (input queues should be on the same machine):
Exception when starting endpoint, error has been logged. Reason: Input
queue must be on the same machine as this process.
How can I make sure that my messages are safe if I can't use MSMQ on my cluster?
Dispatcher!
Now, I've also been looking into the dispatcher for a bit, and it doesn't seem to solve my issue either. I'm assuming the dispatcher also wouldn't be able to get messages from a remote input queue? And besides that, if the dispatcher dispatches messages to the workers and the workers go down, are my messages lost (even though they were not processed)?
Questions?
To summarize, these are the things I'm wondering about for my scenario with NServiceBus:
I want my messages to be safe on the MSMQ cluster and use a remote input queue. Is this something I should or shouldn't do? Is it possible with NServiceBus?
Should I use a dispatcher in this case? Can it read from a remote input queue? (I cannot run the dispatcher on the cluster)
What if the dispatcher dispatches messages to the workers and one of the workers goes down? Do I lose the message(s) that were being processed?
Phill's comment is correct.
The thing is that you would get the type of fault tolerance you require practically by default if you set up a virtualized environment. In that case, the C drive backing the local queue of your processes is actually sitting on the VM image on your SAN.
You will need a MessageEndpointMappings section that you will use to point to the Publisher's input queue. This queue is used by your Subscriber to drop off subscription messages. This will need to be QueueName#ClusterServerName. Be sure to use the cluster name and not a node name. The Subscriber's input queue will be used to receive messages from the Publisher and that will be local, so you don't need the #servername.
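A sketch of what that mapping can look like in the Subscriber's app.config (the message assembly, queue, and cluster names are placeholders; use whichever queue/host separator your NServiceBus version expects):
<configSections>
  <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
</configSections>
<UnicastBusConfig>
  <MessageEndpointMappings>
    <!-- subscription messages for the MyMessages assembly are dropped off in the Publisher's
         input queue on the MSMQ cluster; use the cluster name, not a node name -->
    <add Messages="MyMessages" Endpoint="publisherInputQueue@ClusterServerName" />
  </MessageEndpointMappings>
</UnicastBusConfig>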
There are two levels of failure: one is that the transport is down (say MSMQ), and the other is that the endpoint is down (the Windows service). In the event that the endpoint is down, the transport will handle persisting the messages to disk. A redundant network storage device may be in order.
In the event that the transport is down, assuming it is MSMQ, the messages will back up on the Publisher side of things. Therefore you have to account for the size and number of messages to calculate how long you want messages to back up for. Since the Publisher is clustered, you can be assured that the messages will arrive eventually, assuming you planned your disk appropriately.