ActiveMQ Artemis: 2 node cluster not working with a queue - activemq-artemis

I have a very simple setup with 2 nodes (connected to each other).
I have a unit test that produces 10 messages on a queue, then consumes all the messages from the queue, and then checks that it received 10 messages.
Here is the producer and consumer setup:
<bean id="queueListener" class="SessionAwareMessageListener<TextMessage>" />
<jms:listener-container container-type="default" destination-type="queue" connection-factory="brokerPooledConnectionFactory" acknowledge="auto">
<jms:listener destination="queue.private.mb.sanity.V4" ref="queueListener" />
</jms:listener-container>
<bean id="queuePublisher" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="brokerPooledConnectionFactory" />
<property name="defaultDestinationName" value="queue.private.mb.sanity.V4" />
</bean>
And the connection setup:
<amq:connectionFactory
    brokerURL="failover:(tcp://${broker1.host}:${broker1.port},tcp://${broker2.host}:${broker2.port})?maxReconnectAttempts=1&amp;startupMaxReconnectAttempts=1"
    closeTimeout="100" id="brokerConnectionFactory" />
<!-- Pools of connections -->
<bean id="brokerPooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" init-method="start" destroy-method="stop">
<property name="maxConnections" value="5" />
<property name="idleTimeout" value="0" />
<property name="connectionFactory" ref="brokerConnectionFactory" />
<property name="maximumActiveSessionPerConnection" value="100" />
<property name="timeBetweenExpirationCheckMillis" value="60000" />
<property name="expiryTimeout" value="600000" />
</bean>
What I see is that messages are dispatched across the 2 nodes.
Messages received on the 2nd node are all consumed.
Some messages received on the 1st node are routed to the 2nd node and consumed there.
The other messages received on the 1st node are neither routed nor consumed.
If I launch a consumer connecting directly to the 1st node, it is able to consume those messages.
The same test with a topic instead of a queue works fine.
Any idea why I'm seeing this behavior?
Thanks
Nicolas

Indeed, it was a setup issue: message-load-balancing was set to STRICT instead of ON_DEMAND. With STRICT, the cluster round-robins messages across nodes whether or not they have consumers; with ON_DEMAND, messages are only forwarded to nodes that actually have matching consumers, so none are stranded on a consumer-less node.
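For reference, a hedged sketch of the broker.xml fragment involved (the cluster and connector names here are illustrative, not from this setup):

<cluster-connection name="my-cluster">
    <connector-ref>netty-connector</connector-ref>
    <!-- ON_DEMAND: only forward messages to nodes that have matching consumers -->
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>1</max-hops>
    <static-connectors>
        <connector-ref>other-node-connector</connector-ref>
    </static-connectors>
</cluster-connection>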

Related

Connections remain in idle status and increase until the max connections limit is reached

I have a web app that uses Apache Camel to submit routes which execute some PostgreSQL selects and inserts.
I'm not using any DAO, so I have no code that opens and closes connections; I believed the connection life-cycle was managed by Spring, but that does not seem to be working.
The problem is that every time my route executes, I see one more connection remaining IDLE. Previous IDLE connections are not reused, which eventually leads to the "too many client connections" problem.
In my route I have:
<bean id="configLocation" class="org.springframework.core.io.FileSystemResource">
<constructor-arg type="java.lang.String" value="..../src/main/resources/config/test.xml" />
</bean>
<bean id="dataSourcePostgres" class="org.apache.ibatis.datasource.pooled.PooledDataSource">
<property name="driver" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost:5432/postgres" />
<property name="username" value="postgres" />
<property name="password" value="postgres" />
</bean>
<bean id="postgresTrivenetaSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSourcePostgres" />
<property name="configLocation" ref="configLocation" />
</bean>
Here are some sample queries:
<select id="selectTest" resultType="java.util.LinkedHashMap">
select * from test;
</select>
<insert id="insertTest" parameterType="java.util.LinkedHashMap" useGeneratedKeys="true" keyProperty="id" keyColumn="id">
INSERT INTO test(note,regop_id)
VALUES (#{note},#{idKey});
</insert>
I even tried adding this:
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSourcePostgresTriveneta" />
</bean>
At last I found the problem: the DataSource is never closed automatically at the end of a Camel route.
So each time the Camel route executed, it left an open DataSource behind, and the IDLE connections it had created (their number obviously depends on the DataSource configuration and usage) remained and accumulated over and over.
The final solution was to add an ad-hoc bean at the end of the Camel route that takes the DataSource and closes it, that's all.
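A minimal sketch of such a bean, assuming the MyBatis PooledDataSource defined above (the class name and wiring here are illustrative, not from the original app):

import org.apache.ibatis.datasource.pooled.PooledDataSource;

// Invoked as the last step of the Camel route. PooledDataSource has no close()
// method; forceCloseAll() releases both active and idle pooled connections.
public class DataSourceCloser {

    private final PooledDataSource dataSource;

    public DataSourceCloser(PooledDataSource dataSource) {
        this.dataSource = dataSource;
    }

    // e.g. call it from the route with <bean ref="dataSourceCloser" method="close" />
    public void close() {
        dataSource.forceCloseAll();
    }
}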

Spring Integration - Apache ActiveMQ to Kafka

I am using the configuration below to integrate ActiveMQ with Kafka. I receive a message from ActiveMQ and forward it to Kafka. However, I am noticing that messages are getting dequeued from the JMS queue but are not arriving in Kafka.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jms="http://www.springframework.org/schema/integration/jms"
xmlns:integration="http://www.springframework.org/schema/integration"
xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
xmlns:task="http://www.springframework.org/schema/task"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/integration/jms
http://www.springframework.org/schema/integration/jms/spring-integration-jms.xsd
http://www.springframework.org/schema/integration/kafka
http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd">
<jms:message-driven-channel-adapter
    id="helloJMSAdapater" destination="helloJMSQueue" connection-factory="jmsConnectionfactory"
    channel="helloChannel" extract-payload="true" />
<integration:channel id="helloChannel" />
<integration:service-activator id="sayHelloServiceActivator"
    input-channel="helloChannel" ref="sayHelloService" method="sayHello" />
<int-kafka:outbound-channel-adapter
    id="kafkaOutboundChannelAdapter" kafka-template="template"
    auto-startup="false" sync="true" channel="helloChannel" topic="test1234" />
<bean id="template" class="org.springframework.kafka.core.KafkaTemplate">
<constructor-arg>
<bean class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
<constructor-arg>
<map>
<entry key="bootstrap.servers" value="localhost:9092" />
<!--
<entry key="retries" value="5" />
<entry key="batch.size" value="16384" />
<entry key="linger.ms" value="1" />
<entry key="buffer.memory" value="33554432" />
<entry key="key.serializer" value="org.apache.kafka.common.serialization.StringSerializer" />
<entry key="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer" />
-->
</map>
</constructor-arg>
</bean>
</constructor-arg>
</bean>
</beans>
Also, if there is any issue on the Kafka side, no exception stack trace is reported.
Did I miss anything?
Your messages are consumed by sayHelloServiceActivator, so change your helloChannel channel type to
<publish-subscribe-channel id="helloChannel"/>
The default is DirectChannel:
The DirectChannel has point-to-point semantics but otherwise is more similar to the PublishSubscribeChannel than any of the queue-based channel implementations described above. It implements the SubscribableChannel interface instead of the PollableChannel interface, so it dispatches Messages directly to a subscriber. As a point-to-point channel, however, it differs from the PublishSubscribeChannel in that it will only send each Message to a single subscribed MessageHandler.
As @Hassen Bennour says, if you want to send a message to two consumers, you need a publish/subscribe channel.
That said, you have auto-startup="false" on the Kafka adapter, so it won't even be subscribed to the channel.
If it were started, with your current configuration messages would be sent round-robin, alternately to the service activator and the adapter.
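Putting both fixes together, a minimal sketch using the ids from the question:

<!-- pub/sub channel: every subscriber receives each message -->
<integration:publish-subscribe-channel id="helloChannel" />
<!-- auto-startup="true" so the adapter actually subscribes to the channel -->
<int-kafka:outbound-channel-adapter
    id="kafkaOutboundChannelAdapter" kafka-template="template"
    auto-startup="true" sync="true" channel="helloChannel" topic="test1234" />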

Camel: The application attempted to use a JMS session after it had closed the session

I am new to Camel and I am attempting to write an app that bridges WebSphere MQ and ActiveMQ on JBoss EAP 7. The app deploys successfully and works: I can drop messages on the WebSphere queue, and they get picked up by ActiveMQ. However, I see error messages in the log showing that it attempts to use the JMS session after it has been closed.
15:48:57,814 ERROR [org.jboss.jca.core.connectionmanager.listener.TxConnectionListener] (Camel (camel) thread #1 - JmsConsumer[I0_TEST]) IJ000315: Pool IbmMQQueueFactory has 1 active handles
15:48:57,819 INFO [org.jboss.as.connector.deployers.RaXmlDeployer] (Camel (camel) thread #1 - JmsConsumer[I0_TEST]) wmq.jmsra.rar: MQJCA4016:Unregistered connection handle being closed: 'com.ibm.mq.connector.outbound.ConnectionWrapper#214da401'.
15:49:02,819 WARN [org.apache.camel.component.jms.DefaultJmsMessageListenerContainer] (Camel (camel) thread #1 - JmsConsumer[I0_TEST]) Setup of JMS message listener invoker failed for destination 'I0_TEST' - trying to recover. Cause: Local JMS transaction failed to commit; nested exception is com.ibm.msg.client.jms.DetailedIllegalStateException: MQJCA1020: The session is closed.
The application attempted to use a JMS session after it had closed the session.
Modify the application so that it closes the JMS session only after it has finished using the session.
Here is my applicationContext.xml
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="java:/ConnectionFactory" />
<property name="lookupOnStartup" value="false" />
<property name="cache" value="true" />
<property name="proxyInterface" value="javax.jms.ConnectionFactory" />
</bean>
<bean id="jmsTransactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManagerName" value="java:/TransactionManager" />
</bean>
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="jmsConnectionFactory" />
<property name="transacted" value="true" />
<property name="transactionManager" ref="jmsTransactionManager" />
</bean>
<bean id="wmqConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="java:/jms/IbmMQMsgQCF" />
<property name="lookupOnStartup" value="false" />
<property name="cache" value="true" />
<property name="proxyInterface" value="javax.jms.ConnectionFactory" />
</bean>
<bean id="wmqTransactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManagerName" value="java:/TransactionManager" />
</bean>
<bean id="wmq" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="wmqConnectionFactory" />
<property name="transacted" value="true" />
<property name="transactionManager" ref="wmqTransactionManager" />
</bean>
<bean id="routerlogger" class="org.jboss.as.quickstarts.mdb.RoutLogger" />
<camelContext trace="true" id="camel"
xmlns="http://camel.apache.org/schema/spring">
<route>
<from uri="wmq:websphereQueue"/>
<setExchangePattern pattern="InOnly"/>
<to uri="jms:activeQueue" pattern="InOnly" />
</route>
</camelContext>
It's a simple app; I'm trying to determine what I'm missing.
I found the JBossDeveloper bug "JBEAP-2344: UserTransaction commit(), rollback() closes connection in Websphere MQ 7.5", which looks like it describes your issue and has comments pointing to the documentation update "JBEAP-3535: Documentation: Add note about connection close on commit() and rollback() to Deploy the WebSphere MQ Resource Adapter subchapter":
Could you please add a note that setting tracking="false" solves the problem with WebSphere MQ 7.5 and 8, where the commit() or rollback() method on a UserTransaction closes any JMS connections that were part of the transaction. This part is related to documenting the known limitation of WebSphere MQ in JBEAP-3142.
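In practice that means setting tracking="false" on the connection-definition of the WebSphere MQ resource adapter in the EAP configuration. A hedged sketch, assuming the adapter is defined in standalone.xml (the archive, class, pool, and JNDI names below follow the question's log and config but are not verified):

<resource-adapter id="wmq.jmsra.rar">
    <archive>wmq.jmsra.rar</archive>
    <transaction-support>XATransaction</transaction-support>
    <connection-definitions>
        <!-- tracking="false" stops EAP closing the handle on commit()/rollback() -->
        <connection-definition
            class-name="com.ibm.mq.connector.outbound.ManagedConnectionFactoryImpl"
            jndi-name="java:/jms/IbmMQMsgQCF"
            pool-name="IbmMQQueueFactory"
            tracking="false" />
    </connection-definitions>
</resource-adapter>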

Spring Integration: Mail error and HTTP gateway response

My use case is simple. I want to handle an exception caused by a system being unreachable, perform a retry based on a configured retry policy, send an email when the retry threshold has been met, and return a custom response to the caller.
The challenge I am facing is that I cannot both send an email and return a response to the caller. Since I was initially using an int-mail:outbound-channel-adapter, I would expect this behavior, as it is a one-way component:
<int:chain input-channel="defaultErrorChannel">
<int:service-activator id="mailMessageActivator" expression="#mailHandler.process(payload)" />
<int-mail:outbound-channel-adapter mail-sender="mailSender" />
</int:chain>
However, if I introduce an int-amqp:outbound-gateway in front of the int-mail:outbound-channel-adapter (see the Error Handling config below), I would expect to be able to invoke an int:service-activator to construct and return a response to the caller.
Am I thinking about this the wrong way? I see that someone else had a similar question which is still unanswered. Both of the configurations I mentioned send emails, but the caller always blocks until the timeout without receiving a response.
Here are the relevant parts of my configuration:
Gateway
<int:gateway id="customerGateway" service-interface="com.uscs.crm.integration.CustomerGateway"
default-request-channel="syncCustomers" default-reply-channel="replySyncCustomers" default-reply-timeout="30000">
</int:gateway>
<int:object-to-json-transformer input-channel="syncCustomers" output-channel="outboundRequestChannel" />
<int-http:outbound-gateway request-channel="outboundRequestChannel" reply-channel="replySyncCustomers"
url="http://voorhees148.uscold.com:9595/web/customerSync/createCustomer"
http-method="POST"
rest-template="restTemplate"
expected-response-type="com.uscs.crm.model.CustSyncResponseVO"
mapped-request-headers="Authorization, HTTP_REQUEST_HEADERS">
<int-http:request-handler-advice-chain>
<ref bean="retryWithBackoffAdviceSession" />
</int-http:request-handler-advice-chain>
</int-http:outbound-gateway>
Error Handling
<int:channel id="defaultErrorChannel"/>
<int:channel id="errorResponses"/>
<!--
ExponentialBackOffPolicy.multiplier is applied to the wait time on each retry attempt,
with ExponentialBackOffPolicy.maxInterval configured as the cap.
-->
<bean id="retryWithBackoffAdviceSession" class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice">
<property name="retryTemplate">
<bean class="org.springframework.retry.support.RetryTemplate">
<property name="backOffPolicy">
<bean class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
<property name="initialInterval" value="2000" />
<property name="multiplier" value="2" />
<property name="maxInterval" value="30000"/>
</bean>
</property>
<property name="retryPolicy">
<bean class="org.springframework.retry.policy.SimpleRetryPolicy">
<property name="maxAttempts" value="3"/>
</bean>
</property>
</bean>
</property>
<property name="recoveryCallback">
<bean class="org.springframework.integration.handler.advice.ErrorMessageSendingRecoverer">
<constructor-arg ref="defaultErrorChannel"/>
</bean>
</property>
</bean>
<bean id="custSyncResponseHandler" class="com.uscs.crm.integration.handler.CustSyncResponseHandler"></bean>
<int:chain input-channel="defaultErrorChannel" output-channel="replySyncCustomers">
<int:service-activator id="mailMessageActivator" expression="#mailHandler.process(payload)" />
<int:header-enricher>
<int:header name="ERROR_ID" expression="T(java.lang.System).currentTimeMillis()"/>
</int:header-enricher>
<int-amqp:outbound-gateway
exchange-name="error-responses-exchange"
routing-key-expression="'error.response.'+headers.ERROR_ID"
amqp-template="amqpTemplate" />
<!-- Will this service-activator return a response to the caller (int:gateway) using channel `replySyncCustomers`? -->
<int:service-activator id="custSyncResponseActivator" expression="#custSyncResponseHandler.process(payload)" />
</int:chain>
<int-amqp:inbound-gateway queue-names="error-responses" request-channel="errorResponses"
connection-factory="rabbitConnectionFactory" acknowledge-mode="AUTO" />
<int-mail:outbound-channel-adapter channel="errorResponses" mail-sender="mailSender" />
<!-- (Outbound Channel Adapter/Gateway) rabbit exchanges, queues, and bindings used by this app -->
<rabbit:topic-exchange name="error-responses-exchange" auto-delete="false" durable="true">
<rabbit:bindings>
<rabbit:binding queue="error-responses" pattern="error.response.*"/>
</rabbit:bindings>
</rabbit:topic-exchange>
<rabbit:queue name="error-responses" auto-delete="false" durable="true"/>
SOLUTION: I was able to get this working with help from @Artem. Below are the changes I made.
Service Activator Implementation (handling ErrorMessage)
The key is the line which returns the reconstructed Message with all of the header information from the ErrorMessage.
@Override
public Message<CustSyncResponseVO> process(Message<MessagingException> errorMessage) {
    MessagingException errorException = errorMessage.getPayload();
    CustSyncResponseVO custSyncResponse = new CustSyncResponseVO();
    custSyncResponse.setResponseMessage(ExceptionUtils
            .convertToString(errorMessage.getPayload()));
    return MessageBuilder.withPayload(custSyncResponse)
            .copyHeaders(errorMessage.getHeaders())
            .copyHeadersIfAbsent(errorException.getFailedMessage().getHeaders()).build();
}
Service Activator Config
Used SpEL to reference the #root context so that the whole ErrorMessage is retrieved, instead of the default, which would be the MessagingException (payload), and passed it to the process method on my POJO.
<bean id="custSyncResponseHandler" class="com.uscs.crm.integration.handler.CustSyncResponseHandler" />
<int:chain id="errorGatewayResponseChain" input-channel="defaultErrorChannel" output-channel="replySyncCustomers">
<int:service-activator id="custSyncResponseActivator" expression="#custSyncResponseHandler.process(#root)" />
</int:chain>
I don't see a reason to introduce the AMQP middleware complexity there just for sending an email at the end.
All you need is a <publish-subscribe-channel id="defaultErrorChannel"> with two endpoints as subscribers to it.
The first one is the one-way email-sending <chain>, and the second one is custSyncResponseActivator, to return a reply to your <int-http:outbound-gateway>.
You can find more info on the matter in the Spring Integration Reference Manual.
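A minimal sketch of that suggestion, reusing bean ids from the configuration above:

<int:publish-subscribe-channel id="defaultErrorChannel" />

<!-- subscriber 1: one-way email notification -->
<int:chain input-channel="defaultErrorChannel">
    <int:service-activator expression="@mailHandler.process(payload)" />
    <int-mail:outbound-channel-adapter mail-sender="mailSender" />
</int:chain>

<!-- subscriber 2: builds the reply that flows back to the HTTP gateway -->
<int:service-activator input-channel="defaultErrorChannel" output-channel="replySyncCustomers"
    expression="@custSyncResponseHandler.process(#root)" />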

How to partition a database reader, write to different files, and balance the thread load

EDIT:
I think there is something wrong with this WHERE clause:
I tried to run my first test, which runs a single thread and takes about 35 minutes: with this whereClause the execution is terribly slow, but when I just do a select * from the table, without the whereClause, the process runs normally.
I am trying to use step partitioning in a Spring Batch job, but I am not sure it is appropriate for my case:
I have to read from a database with ~30 million records. Each record has a bank_id column, and there are about 23 different banks.
I have to read the value of this column and separate the records of each bank into different txt files.
I want the job to parallelize the work across 4 or 8 threads. At first I tried step partitioning: I split the job into 4 slaves and passed the bank_id to process as a parameter to the query in SqlPagingQueryProviderFactoryBean, using only 4 different ids. But the number of records varies widely from one bank_id to another, so some slaves finish their work long before the others.
I want a slave that finishes its work to start processing another bank_id.
I need help doing something like this in Spring Batch. I am using Spring Batch version 2.1.
Here are my files:
<bean id="arquivoWriter"
class="org.springframework.batch.item.file.FlatFileItemWriter"
scope="step">
<property name="encoding" value="ISO-8859-1" />
<property name="lineAggregator">
<bean
class="org.springframework.batch.item.file.transform.FormatterLineAggregator">
<property name="fieldExtractor">
<bean
class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names"
value="name_bank, id_bank, etc" />
</bean>
</property>
<property name="format"
value="..." />
</bean>
</property>
<property name="resource"
value="file:./arquivos/#{stepExecutionContext[faixa]}.txt" />
</bean>
<job id="partitionJob" xmlns="http://www.springframework.org/schema/batch">
<step id="masterStep">
<partition step="slave" partitioner="rangePartitioner">
<handler task-executor="taskExecutor" />
</partition>
</step>
</job>
<step id="slave" xmlns="http://www.springframework.org/schema/batch">
<tasklet>
<chunk reader="pagingReader" writer="arquivoWriter"
commit-interval="#{jobParameters['commit.interval']}" />
<listeners>
<listener ref="myChunkListener"></listener>
</listeners>
</tasklet>
</step>
<bean id="rangePartitioner" class="....RangePartitioner" />
<bean id="pagingReader"
class="org.springframework.batch.item.database.JdbcPagingItemReader"
scope="step">
<property name="dataSource" ref="dataSource" />
<property name="fetchSize" value="#{jobParameters['fetch.size']}"></property>
<property name="queryProvider">
<bean
class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="selectClause">
<value>
<![CDATA[
SELECT ...
]]>
</value>
</property>
<property name="fromClause" value="FROM my_table" />
<property name="whereClause" value="where id_bank = :id_op" />
</bean>
</property>
<property name="parameterValues">
<map>
<entry key="id_op" value="#{stepExecutionContext[id_op]}" />
</map>
</property>
<property name="maxItemCount" value="#{jobParameters['max.rows']}"></property>
<property name="rowMapper">
<bean class="....reader.MyRowMapper" />
</property>
</bean>
The range partitioner:
public class RangePartitioner implements Partitioner {

    @Autowired
    BancoDao bancoDao;

    final Map<String, ExecutionContext> result = new HashMap<String, ExecutionContext>();

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        List<OrgaoPagadorQuantidadeRegistrosTO> lista = bancoDao.findIdsOps();
        for (OrgaoPagadorQuantidadeRegistrosTO op : lista) {
            String name = String.valueOf(op.getIdOrgaoPagador());
            ExecutionContext ex = new ExecutionContext();
            ex.putLong("id_op", op.getIdBank());
            ex.putString("faixa", name);
            result.put("p" + name, ex);
        }
        return result;
    }
}
What you're asking for should work, assuming you have enough work for each of the slaves. For example, if you have 23 banks but one has 20 million records and the others each have 100,000, the slaves not working on the big bank will free up quickly.
Are you creating a StepExecution per bank or per thread? I'd recommend doing it per bank. This allows threads to pick up work as they finish. Otherwise, you end up being responsible for that load balancing yourself, by implementing a Partitioner that normalizes the workload.
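A hedged sketch of that setup, reusing the job definition from the question: keep one partition per bank (the RangePartitioner above already returns one ExecutionContext per bank_id) and cap the thread count on the task executor, so each thread picks up the next bank's partition as soon as it finishes one. The executor bean below is an assumption, not from the original config:

<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <!-- 4 threads working through ~23 bank partitions; queued partitions
         are picked up as threads become free -->
    <property name="corePoolSize" value="4" />
    <property name="maxPoolSize" value="4" />
</bean>

<job id="partitionJob" xmlns="http://www.springframework.org/schema/batch">
    <step id="masterStep">
        <partition step="slave" partitioner="rangePartitioner">
            <!-- grid-size is only a hint; the actual partition count is the
                 size of the map returned by the Partitioner -->
            <handler task-executor="taskExecutor" grid-size="23" />
        </partition>
    </step>
</job>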