How to configure standalone HornetQ along with EAP 6.3 or JBoss 7 for durable JMS subscriptions?

I would like to send JMS messages from one JBoss server to another, but through a standalone HornetQ server.
This way the messages can be delivered later in case the destination server crashes (provided the subscriptions are durable).
However, I already have messages routed internally at each JBoss, and I would like a configuration that will not conflict with those.
The topology of the desired solution is visualised in the diagram.
How can I achieve this configuration?

Let's start by configuring the standalone HornetQ. Download the standalone server from the HornetQ download page.
Next you have to configure the topic. Add your topic to the %HORNETQ-HOME%\config\stand-alone\non-clustered\hornetq-jms.xml file:
<topic name="Topic1">
<entry name="java:/topic/Topic1"/>
</topic>
You probably want to test the configuration on one machine first, so I recommend changing the ports on which HornetQ will be listening for messages, e.g. from 5445 and 5455 to 5446 and 5456.
Please edit the %HORNETQ-HOME%\config\stand-alone\non-clustered\hornetq-configuration.xml file to change these ports. You also want to be able to register durable subscribers, so add these two lines to the <security-setting match="#"> element in the same file:
<permission type="createDurableQueue" roles="guest"/>
<permission type="deleteDurableQueue" roles="guest"/>
Then start the standalone HornetQ by running %HORNETQ-HOME%\run.bat.
First, let's see how to send a message to this newly created topic. For this we need a dedicated connection factory on JBoss Server 1. In the messaging subsystem (urn:jboss:domain:messaging) of %JBOSS-HOME1%\standalone\configuration\standalone-full.xml, please add a new pooled connection factory:
<pooled-connection-factory name="StandaloneHornetQConnectionFactory">
<transaction mode="xa"/>
<connectors>
<connector-ref connector-name="standalone-hornetq-connector"/>
</connectors>
<entries>
<entry name="java:jboss/exported/jms/StandaloneHornetQConnectionFactory"/>
</entries>
</pooled-connection-factory>
From now on you have to use this connection factory whenever you want to send a message to Topic1. This is usually done by dependency injection:
@Resource(lookup = "java:jboss/exported/jms/StandaloneHornetQConnectionFactory")
private ConnectionFactory connectionFactory;
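For completeness, here is a minimal sketch of sending a text message through the injected factory; the Topic1Sender class, the sendMessage method and the use of session.createTopic("Topic1") are illustrative assumptions, not part of the original configuration:
import javax.annotation.Resource;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

public class Topic1Sender {

    @Resource(lookup = "java:jboss/exported/jms/StandaloneHornetQConnectionFactory")
    private ConnectionFactory connectionFactory;

    public void sendMessage(String text) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // "Topic1" must match the topic deployed on the standalone HornetQ server
            Topic topic = session.createTopic("Topic1");
            MessageProducer producer = session.createProducer(topic);
            producer.send(session.createTextMessage(text));
        } finally {
            connection.close();
        }
    }
}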
As you can see above, we referenced standalone-hornetq-connector but don't have it yet. Let's create it by adding another netty connector into <connectors>:
<connectors>
<netty-connector name="standalone-hornetq-connector" socket-binding="standalone-hornetq-socket"/>
<netty-connector name="netty" socket-binding="messaging"/>
<netty-connector name="netty-throughput" socket-binding="messaging-throughput">
<param key="batch-delay" value="50"/>
</netty-connector>
<in-vm-connector name="in-vm" server-id="0"/>
</connectors>
As you can see, we need the standalone-hornetq-socket socket binding. Let's create it in the <socket-binding-group> subelement:
<outbound-socket-binding name="standalone-hornetq-socket">
<remote-destination host="localhost" port="5446"/>
</outbound-socket-binding>
As you can see, this is an outbound socket binding that will be used to send messages to our standalone HornetQ server, which is listening on port 5446. This configuration is enough on JBoss Server 1 to send messages to JBoss Server 2 via the standalone HornetQ server.
To be able to receive the messages on JBoss Server 2, we have to repeat the above configuration once again in %JBOSS-HOME2%\standalone\configuration\standalone-full.xml. However, this time we offset the ports of JBoss Server 2 by port-offset:3, to be able to work on the same machine:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:3}">
This step is not necessary if you put the servers on separate machines (if you do, please change localhost accordingly).
Now we can create an MDB that will be a durable subscriber of Topic1.
@MessageDriven(name = "MyDurableSubscriber", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "java:/topic/Topic1"),
    @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable"),
    @ActivationConfigProperty(propertyName = "subscriptionName", propertyValue = "Topic1Subscription"),
    @ActivationConfigProperty(propertyName = "clientId", propertyValue = "MySubscriber")
})
@ResourceAdapter("StandaloneHornetQConnectionFactory")
public class MyDurableSubscriber implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // ...
    }
}
The @ResourceAdapter("StandaloneHornetQConnectionFactory") line is the most important one, because by default all MDBs use the hornetq-ra resource adapter to subscribe (a local subscription). The ResourceAdapter annotation is from the org.jboss.ejb3.annotation package,
and you can make this class available to you via a Maven dependency:
<dependency>
<groupId>org.jboss.ejb3</groupId>
<artifactId>jboss-ejb3-ext-api</artifactId>
<version>2.1.0</version>
<scope>provided</scope>
</dependency>
With all of this set up you can enjoy durable subscriptions with a "star" topology of your servers.

Related

Make Artemis Slave Replication Use SSL

In Artemis, when using replication to keep master/slave pairs synchronized, the data will be replicated to the slave using a 'connection'.
I want to ensure this replication connection is encrypted. I suspect that this is done by using SSL on the connectors section of broker.xml; however, digging through the guides/official docs does not explicitly show how this is done. I could go wading through the source code, play with settings and try to sniff the traffic, but I thought asking here might be a bit easier.
Let's assume I have just a master/slave pair for now (I know this is not good for split brain, but let's keep it simple) and that I will be using static connector lists, as UDP is not allowed in my data center. I have the following setup:
<connectors xmlns="urn:activemq:core">
<connector name="master">
tcp://master:61616?sslEnabled=true;keyStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001.jks;keyStorePassword=1q2w3e4r;needClientAuth=true;trustStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001_trust.jks;truststorepassword=1q2w3e4r
</connector>
<connector name="slave">
tcp://slave:61616?sslEnabled=true;keyStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001.jks;keyStorePassword=1q2w3e4r;needClientAuth=true;trustStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001_trust.jks;truststorepassword=1q2w3e4r
</connector>
</connectors>
<cluster-connections>
<cluster-connection name="amq-cluster">
<connector-ref>master</connector-ref>
<retry-interval>500</retry-interval>
<retry-interval-multiplier>1.1</retry-interval-multiplier>
<max-retry-interval>5000</max-retry-interval>
<initial-connect-attempts>-1</initial-connect-attempts>
<reconnect-attempts>-1</reconnect-attempts>
<forward-when-no-consumers>false</forward-when-no-consumers>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>master</connector-ref>
<connector-ref>slave</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<replication>
<master>
<check-for-live-server>true</check-for-live-server>
<!-- what master/slave group is this broker part of, master and slave must match -->
<group-name>nft-group-1</group-name>
<!-- does the broker initiate a quorum vote if connection to slave fails -->
<vote-on-replication-failure>true</vote-on-replication-failure>
<!-- how many votes should the backup initiate when requesting a quorum? -->
<vote-retries>5</vote-retries>
<!-- how long should the broker wait between vote retries -->
<vote-retry-wait>5000</vote-retry-wait>
<vote-on-replication-failure>true</vote-on-replication-failure>
<cluster-name>amq-cluster</cluster-name>
</master>
</replication>
</ha-policy>
From my understanding, the connectors will be used when forming the master/slave pairs, and then the replication will be done via SSL using the configuration from the connectors section. Is this the case?
Yes, that is the case.

Redelivering JMS messages from the DLQ

I have two components communicating over a JMS queue in a WildFly instance. As soon as the consumer of the queue disconnects or gets stopped, the messages are forwarded to the DLQ (at least when WildFly is restarted).
Is it possible to configure WildFly to automatically redeliver the messages from the DLQ as soon as a consumer reconnects to the queue?
Some details
WildFly version: 8.2.0
standalone.xml - as far as I can tell, nothing special:
<jms-destinations>
<jms-queue name="ExpiryQueue">
<entry name="java:/jms/queue/ExpiryQueue"/>
<durable>false</durable>
</jms-queue>
<jms-queue name="DLQ">
<entry name="java:/jms/queue/DLQ"/>
<durable>false</durable>
</jms-queue>
...
<jms-queue name="Q1-Producer-to-Consumer">
<entry name="java:/queue/Q1-Producer-to-Consumer"/>
<entry name="java:jboss/exported/queue/Q1-Producer-to-Consumer"/>
<durable>false</durable>
</jms-queue>
</jms-destinations>
Thanks.
The DLQ only gets messages that have thrown an exception during message processing. If a consumer disconnects, the messages will simply remain on the queue awaiting delivery.
If you are seeing an issue whereby messages hit the DLQ during a server restart, this suggests that your consumer is consuming messages before the resources it requires are available, and so is erroring when processing them. You would be better off fixing your consumer so it does not start consuming messages too early, rather than trying to fish the failed messages back from the DLQ.
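To illustrate the point, here is a minimal, hypothetical sketch (the FailingConsumer class is not from the original post) showing that it is a processing failure, not a missing consumer, that eventually pushes a message to the DLQ:
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "java:/queue/Q1-Producer-to-Consumer")
})
public class FailingConsumer implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // An unhandled RuntimeException rolls back the delivery; after max-delivery-attempts
        // is exceeded, the broker moves the message to the DLQ. A consumer that is merely
        // disconnected never triggers this.
        throw new RuntimeException("simulated processing failure");
    }
}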

ActiveMQ Artemis Producer performance issue with 30k address/queues

I have an application that needs to send messages to ~30k addresses. I used to have OpenMQ as my broker and used JMS. The JMS client code that I have was capable of sending ~30k messages to the broker within a few seconds. Then I switched to Artemis and the send performance degraded drastically. Now it takes up to 10 minutes to send all the messages to the Artemis broker.
I played with different configuration settings, both on the server (broker.xml) and via the connection URL.
Address settings in broker.xml:
<address-setting match="#">
<!-- <dead-letter-address>DLQ</dead-letter-address> -->
<!-- <expiry-address>ExpiryQueue</expiry-address> -->
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>DROP</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
Sending
private static final String BROKER_URL = "tcp://localhost:61616?minLargeMessageSize=10485760;compressLargeMessages=true;producerWindowSize=-1";
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(BROKER_URL);
factory.setDupsOKBatchSize(50 * 1024 * 1024);
factory.setProducerWindowSize(-1);
factory.setBlockOnDurableSend(false);
factory.setBlockOnNonDurableSend(false);
factory.setCacheDestinations(true);
System.out.println("Connection factory " + factory.toString());
Connection connection = factory.createConnection();
Session producerSession = connection.createSession(false, ActiveMQJMSConstants.PRE_ACKNOWLEDGE);
MessageProducer producer = producerSession.createProducer(null);
producer.setDisableMessageID(true);
producer.setDisableMessageTimestamp(true);
// Create a JMS message of type ByteMessage, and send
producer.send(destination, jmsMessage, DeliveryMode.NON_PERSISTENT, JMS_MESSAGE_PRIORITY, DEFAULT_TTL);
With the above code (simplified for readability) I only get 20 to 40 messages per second.
Am I missing something? Or is this a performance penalty introduced by the JMS-to-Core conversion in the Artemis client library?
Is there anything I can do to improve the message rate?

Message redistribution on ArtemisMQ 2.x does not work

I would like to enable message redistribution on my 2-node cluster with static connectors, but it does not seem to work.
1) I have 10 producers that write to the queue "MyTestQueue" on node 1 (but no consumers).
2) I have 1 consumer on node 2 (but no producers) that consumes messages from node 2.
I expect that node 1 will redistribute the messages to node 2, where the consumer exists, but it does not. The message count on node 1 is still equal to the number of messages that were sent to node 1.
I have the following configuration in my broker.xml, which sets forward-when-no-consumers to false.
I have also set redistribution-delay to a value of zero.
<jms xmlns="urn:activemq:jms">
<queue name="MyTestQueue"/>
</jms>
...
<cluster-connections>
<cluster-connection name="my-test-cluster">
<address>jms</address>
<connector-ref>server0-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<forward-when-no-consumers>false</forward-when-no-consumers>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<confirmation-window-size>1024</confirmation-window-size>
<static-connectors>
<connector-ref>server1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
...
<address-settings>
<address-setting match="#">
<redelivery-delay>5000</redelivery-delay>
<redelivery-delay-multiplier>3</redelivery-delay-multiplier>
<max-redelivery-delay>10000</max-redelivery-delay>
<max-delivery-attempts>10</max-delivery-attempts>
<max-size-bytes>104857600</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<address-full-policy>PAGE</address-full-policy>
<redistribution-delay>0</redistribution-delay>
</address-setting>
</address-settings>
How can I get the message redistribution to work?
This might be related to a known issue: there is a situation in which the broker fails to load balance the messages if they don't contain any application properties.
Could you please try that?
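If it helps, here is a minimal, hypothetical sketch of what sending a message with an application property looks like on the producer side (the class, property name and value are illustrative only):
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

public class PropertySettingSender {

    // Assumes a Session and a MessageProducer for MyTestQueue are created elsewhere.
    void sendWithProperty(Session session, MessageProducer producer) throws JMSException {
        TextMessage message = session.createTextMessage("test payload");
        // Set at least one application-level property so the message does not travel
        // with an empty application-properties section.
        message.setStringProperty("sampleProperty", "sampleValue");
        producer.send(message);
    }
}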
A couple of things...
What version of Artemis are you using? Have you tried reproducing this with version 2.2.0?
What kind of client are you using (e.g. JMS, AMQP, STOMP, etc.)?
Do you have a reproducible test-case (e.g. a modified version of one of the examples shipped with Artemis)?
The configuration element <forward-when-no-consumers> is not valid in Artemis (although it was in older versions of HornetQ).
Remove <address>jms</address> from the cluster connection configuration - each cluster connection only applies to addresses that match the specified address. Also make sure that you're using a compatible client, because messages from 1.x clients to a 2.x cluster are lost when they are load balanced to nodes with matching consumers.
Here's an official, working example of an ActiveMQ Artemis configuration with a symmetric cluster, on-demand load balancing and message redistribution.

Spring Integration - Kafka Outbound Adapter Acknowledge Issue

Before I post my question, I would like to thank Gary and Artem for helping me resolve my issues; because of that I am able to successfully post messages from JMS to Kafka with a transaction in place.
Now I am facing another issue while testing what happens when my Kafka broker is down.
When Kafka is down, for the first few retries the Kafka outbound adapter throws an exception and the messages are returned to JMS and retried again and again.
However, after a couple of retries, even when Kafka is still down, the messages are dequeued from JMS and I get the following exception:
2017-07-10 23:27:51.117 ERROR 16116 --- [enerContainer-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='Test JPMC' to topic test:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
My integration xml is :
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jms="http://www.springframework.org/schema/integration/jms"
xmlns:integration="http://www.springframework.org/schema/integration"
xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
xmlns:task="http://www.springframework.org/schema/task"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/integration/jms
http://www.springframework.org/schema/integration/jms/spring-integration-jms.xsd
http://www.springframework.org/schema/integration/kafka
http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd">
<jms:message-driven-channel-adapter
id="helloJMSAdapater" container="requestListenerContainer"
channel="helloChannel" extract-payload="true" error-channel="errorChannel"/>
<integration:recipient-list-router
input-channel="errorChannel">
<integration:recipient channel="errorOutputChannel" />
<integration:recipient channel="rethrowChannel" />
</integration:recipient-list-router>
<jms:outbound-channel-adapter id="errorQueueChannelAdapter"
channel="errorOutputChannel" destination="errorQueue" connection-factory="jmsConnectionfactory"
delivery-persistent="true" explicit-qos-enabled="true" />
<int-kafka:outbound-channel-adapter
id="kafkaOutboundChannelAdapter" kafka-template="kafkaTemplate"
auto-startup="true" sync="true" channel="inputToKafka" topic="test">
</int-kafka:outbound-channel-adapter>
</beans>
I don't want to acknowledge the JMS messages unless they are successfully posted to Kafka.
Is this because of some default parameter that Kafka is setting?
My Kafka config is below:
@Configuration
@Component
public class KafkaConfig {
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
@Bean
public ProducerFactory<String, String> producerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");//this.brokerAddress);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
// set more properties
return new DefaultKafkaProducerFactory<>(props);
}
}
That isn't a Kafka problem. If your messages are being "dequeued from JMS", make sure that the redelivery policy on the queue is configured to be unlimited.
For example, the ActiveMQ story is described here: http://activemq.apache.org/redelivery-policy.html
maximumRedeliveries (default 6) sets the maximum number of times a message will be redelivered before it is considered a poison pill and returned to the broker so that it can go to a Dead Letter Queue.
Set it to -1 for unlimited redeliveries.
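As a sketch, assuming the JMS side is plain ActiveMQ (the broker URL and class name below are illustrative), unlimited redeliveries can be configured on the client connection factory roughly like this:
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class UnlimitedRedeliveryFactory {

    public static ActiveMQConnectionFactory create() {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        // NO_MAXIMUM_REDELIVERIES (-1) keeps redelivering instead of routing to the DLQ.
        policy.setMaximumRedeliveries(RedeliveryPolicy.NO_MAXIMUM_REDELIVERIES);
        return factory;
    }
}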