Spring Integration - Apache ActiveMQ to Kafka - apache-kafka

I am using the configuration below to integrate ActiveMQ with Kafka: I receive a message from ActiveMQ and forward it to Kafka. However, I am noticing that messages are getting dequeued from the JMS queue but are not arriving in Kafka.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jms="http://www.springframework.org/schema/integration/jms"
xmlns:integration="http://www.springframework.org/schema/integration"
xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
xmlns:task="http://www.springframework.org/schema/task"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/integration/jms
http://www.springframework.org/schema/integration/jms/spring-integration-jms.xsd
http://www.springframework.org/schema/integration/kafka
http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd">
<jms:message-driven-channel-adapter
id="helloJMSAdapater" destination="helloJMSQueue" connection-factory="jmsConnectionfactory"
channel="helloChannel" extract-payload="true" />
<integration:channel id="helloChannel" />
<integration:service-activator id="sayHelloServiceActivator"
input-channel="helloChannel" ref="sayHelloService" method="sayHello" />
<int-kafka:outbound-channel-adapter
id="kafkaOutboundChannelAdapter" kafka-template="template"
auto-startup="false" sync="true" channel="helloChannel" topic="test1234" />
<bean id="template" class="org.springframework.kafka.core.KafkaTemplate">
<constructor-arg>
<bean class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
<constructor-arg>
<map>
<entry key="bootstrap.servers" value="localhost:9092" />
<!--entry key="retries" value="5" /> <entry key="batch.size" value="16384"
/> <entry key="linger.ms" value="1" /> <entry key="buffer.memory" value="33554432"
/> < entry key="key.serializer" value="org.apache.kafka.common.serialization.StringSerializer"
/> <entry key="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer"
/ -->
</map>
</constructor-arg>
</bean>
</constructor-arg>
</bean>
</beans>
Also, when there is any issue on the Kafka side, no exception stack trace is reported.
Did I miss anything?

Your messages are being consumed by the sayHelloServiceActivator.
So change your helloChannel channel type to
<publish-subscribe-channel id="helloChannel"/>
The default is DirectChannel:
The DirectChannel has point-to-point semantics but otherwise is more
similar to the PublishSubscribeChannel than any of the queue-based
channel implementations described above. It implements the
SubscribableChannel interface instead of the PollableChannel
interface, so it dispatches Messages directly to a subscriber. As a
point-to-point channel, however, it differs from the
PublishSubscribeChannel in that it will only send each Message to a
single subscribed MessageHandler.

As @Hassen Bennour says, if you want to send a message to two consumers, you need a publish/subscribe channel.
That said, you have auto-startup="false" on the kafka adapter, so it won't even be subscribed to the channel.
If it was started, with your current configuration messages would be sent round-robin alternately to the service activator and adapter.
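Putting the two answers together, a corrected sketch might look like this (names are reused from the question; auto-startup is simply left at its default of true so the Kafka adapter subscribes on startup):
<integration:publish-subscribe-channel id="helloChannel" />

<jms:message-driven-channel-adapter
    id="helloJMSAdapater" destination="helloJMSQueue"
    connection-factory="jmsConnectionfactory"
    channel="helloChannel" extract-payload="true" />

<!-- first subscriber: the service activator -->
<integration:service-activator id="sayHelloServiceActivator"
    input-channel="helloChannel" ref="sayHelloService" method="sayHello" />

<!-- second subscriber: the Kafka adapter; with a pub-sub channel both
     subscribers now receive every message instead of alternating -->
<int-kafka:outbound-channel-adapter
    id="kafkaOutboundChannelAdapter" kafka-template="template"
    sync="true" channel="helloChannel" topic="test1234" />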

Data streaming in Apache-Ignite through Apache-Kafka consumes high CPU

I am using the Ignite Source Connector (a single Kafka Connect node) to export Ignite events from my Ignite cluster (2 nodes) to Kafka broker nodes (2 nodes), with a single topic and 29 partitions.
I process 0.1 million events (PUT, DELETE) per minute, with messages of roughly 1 KB on average.
My connector node consumes 90% of the CPU.
My Connector node machine config.
RAM --> 30 GB
ROM --> 1TB
Configured Heap --> 15GB
No of CPU Core --> 40 (20 x 2)
#connector
name=my-ignite-source-connector
connector.class=class of my source connector
tasks.max=1
topicNames=ignite-data
#cache
cacheName=DOCIDS
cacheAllowOverwrite=true
cacheEvts=put , removed
evtBatchSize=100
numberOfPartitions=29
igniteCfg=/myconfig/ignite-config.xml
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean abstract="true" id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Set to true to enable distributed class loading for examples, default is false. -->
<property name="communicationSpi"><bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi"><property name="socketWriteTimeout" value="60000"/></bean></property><property name="peerClassLoadingEnabled" value="true"/><property name="clientFailureDetectionTimeout" value="10000"/><property name="dataStorageConfiguration"><bean class="org.apache.ignite.configuration.DataStorageConfiguration"><property name="walPath" value="wal_data/"/><property name="walArchivePath" value="wal_data/"/><property name="defaultDataRegionConfiguration"><bean class="org.apache.ignite.configuration.DataRegionConfiguration"><property name="persistenceEnabled" value="true"/><property name="maxSize" value="#{15L * 1024 * 1024 * 1024}"/></bean></property></bean></property>
<!-- Enable task execution events for examples. -->
<property name="includeEventTypes">
<list>
<!--Task execution events-->
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_STARTED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
<!--Cache events-->
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead of static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>ignite-server-node1:47500..47509</value>
<value>ignite-server-node2:47500..47509</value>
<value>kafka-conect-node:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
The data transfer works as intended for only a few hours; because of this high CPU consumption, the Kafka connector node then goes OFFLINE in the Ignite server topology.
Any pointers will help.
Thanks in advance.

ActiveMQ Artemis: 2 node cluster not working with a queue

I have a very simple setup with 2 nodes (connected to each other).
I have a unit test that produces 10 messages on a queue, then consumes all the messages from the queue and then checks that it received 10 messages.
Here is the producer and consumer setup
<bean id="queueListener" class="SessionAwareMessageListener<TextMessage>" />
<jms:listener-container container-type="default" destination-type="queue" connection-factory="brokerPooledConnectionFactory" acknowledge="auto">
<jms:listener destination="queue.private.mb.sanity.V4" ref="queueListener" />
</jms:listener-container>
<bean id="queuePublisher" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="brokerPooledConnectionFactory" />
<property name="defaultDestinationName" value="queue.private.mb.sanity.V4" />
</bean>
And the connection setup
<amq:connectionFactory
brokerURL="failover:(tcp://${broker1.host}:${broker1.port},tcp://${broker2.host}:${broker2.port})?maxReconnectAttempts=1&startupMaxReconnectAttempts=1"
closeTimeout="100" id="brokerConnectionFactory" />
<!-- Pools of connections -->
<bean id="brokerPooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" init-method="start" destroy-method="stop">
<property name="maxConnections" value="5" />
<property name="idleTimeout" value="0" />
<property name="connectionFactory" ref="brokerConnectionFactory" />
<property name="maximumActiveSessionPerConnection" value="100" />
<property name="timeBetweenExpirationCheckMillis" value="60000" />
<property name="expiryTimeout" value="600000" />
</bean>
What I see is that messages are dispatched to the 2 nodes.
Messages received on the 2nd node are all consumed.
Some messages received on the 1st node are routed to the 2nd node and consumed from there.
The other messages received on the 1st node are neither routed nor consumed.
If I launch a consumer connecting directly to the 1st node, it is able to consume the messages.
The same test with a topic instead of a queue works fine.
Any idea why I'm facing that behavior?
Thanks
Nicolas
Indeed, it was a setup issue. The message-load-balancing was set to STRICT instead of ON_DEMAND.
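For reference, message-load-balancing is configured on the cluster-connection in each broker's broker.xml. A minimal sketch, assuming a static two-node cluster (the connector names here are illustrative, not taken from the question):
<cluster-connections>
    <cluster-connection name="my-cluster">
        <connector-ref>node1-connector</connector-ref>
        <!-- ON_DEMAND only redistributes messages to nodes that have matching consumers;
             STRICT round-robins them regardless of where the consumers are -->
        <message-load-balancing>ON_DEMAND</message-load-balancing>
        <max-hops>1</max-hops>
        <static-connectors>
            <connector-ref>node2-connector</connector-ref>
        </static-connectors>
    </cluster-connection>
</cluster-connections>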

Spring Integration: Mail error and HTTP gateway response

My use case is simple. I want to handle an exception caused by a system being unreachable, retry based on a configured retry policy, send an email when the retry threshold has been reached, and return a custom response to the caller.
The challenge I am facing is that I cannot both send an email and return a response to the caller. Since I was initially using an int-mail:outbound-channel-adapter, I would expect this behavior, as it is a one-way component:
<int:chain input-channel="defaultErrorChannel">
<int:service-activator id="mailMessageActivator" expression="#mailHandler.process(payload)" />
<int-mail:outbound-channel-adapter mail-sender="mailSender" />
</int:chain>
However, if I introduce an int-amqp:outbound-gateway in front of the int-mail:outbound-channel-adapter (see the Error Handling config below), I would expect to be able to invoke an int:service-activator to construct and return a response to the caller.
Am I thinking about this the wrong way? I see that someone else had a similar question which is still unanswered. Both of the configurations I mentioned send emails, but the caller always blocks without receiving a response until the timeout expires.
Here are the relevant parts of my configuration:
Gateway
<int:gateway id="customerGateway" service-interface="com.uscs.crm.integration.CustomerGateway"
default-request-channel="syncCustomers" default-reply-channel="replySyncCustomers" default-reply-timeout="30000">
</int:gateway>
<int:object-to-json-transformer input-channel="syncCustomers" output-channel="outboundRequestChannel" />
<int-http:outbound-gateway request-channel="outboundRequestChannel" reply-channel="replySyncCustomers"
url="http://voorhees148.uscold.com:9595/web/customerSync/createCustomer"
http-method="POST"
rest-template="restTemplate"
expected-response-type="com.uscs.crm.model.CustSyncResponseVO"
mapped-request-headers="Authorization, HTTP_REQUEST_HEADERS">
<int-http:request-handler-advice-chain>
<ref bean="retryWithBackoffAdviceSession" />
</int-http:request-handler-advice-chain>
</int-http:outbound-gateway>
Error Handling
<int:channel id="defaultErrorChannel"/>
<int:channel id="errorResponses"/>
<!--
ExponentialBackOffPolicy.multiplier is applied to the wait time on each retry attempt,
capped by ExponentialBackOffPolicy.maxInterval. With the values below (initialInterval
2000, multiplier 2, maxAttempts 3): attempt 1 fails, wait 2s, attempt 2 fails, wait 4s,
attempt 3 fails, then the recovery callback fires.
-->
<bean id="retryWithBackoffAdviceSession" class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice">
<property name="retryTemplate">
<bean class="org.springframework.retry.support.RetryTemplate">
<property name="backOffPolicy">
<bean class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
<property name="initialInterval" value="2000" />
<property name="multiplier" value="2" />
<property name="maxInterval" value="30000"/>
</bean>
</property>
<property name="retryPolicy">
<bean class="org.springframework.retry.policy.SimpleRetryPolicy">
<property name="maxAttempts" value="3"/>
</bean>
</property>
</bean>
</property>
<property name="recoveryCallback">
<bean class="org.springframework.integration.handler.advice.ErrorMessageSendingRecoverer">
<constructor-arg ref="defaultErrorChannel"/>
</bean>
</property>
</bean>
<bean id="custSyncResponseHandler" class="com.uscs.crm.integration.handler.CustSyncResponseHandler"></bean>
<int:chain input-channel="defaultErrorChannel" output-channel="replySyncCustomers">
<int:service-activator id="mailMessageActivator" expression="#mailHandler.process(payload)" />
<int:header-enricher>
<int:header name="ERROR_ID" expression="T(java.lang.System).currentTimeMillis()"/>
</int:header-enricher>
<int-amqp:outbound-gateway
exchange-name="error-responses-exchange"
routing-key-expression="'error.response.'+headers.ERROR_ID"
amqp-template="amqpTemplate" />
<!-- Will this service-activator return a response to the caller (int:gateway) using channel `replySyncCustomers`? -->
<int:service-activator id="custSyncResponseActivator" expression="#custSyncResponseHandler.process(payload)" />
</int:chain>
<int-amqp:inbound-gateway queue-names="error-responses" request-channel="errorResponses"
connection-factory="rabbitConnectionFactory" acknowledge-mode="AUTO" />
<int-mail:outbound-channel-adapter channel="errorResponses" mail-sender="mailSender" />
<!-- (Outbound Channel Adapter/Gateway) rabbit exchanges, queues, and bindings used by this app -->
<rabbit:topic-exchange name="error-responses-exchange" auto-delete="false" durable="true">
<rabbit:bindings>
<rabbit:binding queue="error-responses" pattern="error.response.*"/>
</rabbit:bindings>
</rabbit:topic-exchange>
<rabbit:queue name="error-responses" auto-delete="false" durable="true"/>
SOLUTION: I was able to get this working with help from @Artem. Below are the changes I made.
Service Activator Implementation (handling ErrorMessage)
The key is the line which returns the reconstructed Message with all of the header information from the ErrorMessage.
@Override
public Message<CustSyncResponseVO> process(Message<MessagingException> errorMessage) {
    MessagingException errorException = errorMessage.getPayload();
    CustSyncResponseVO custSyncResponse = new CustSyncResponseVO();
    custSyncResponse.setResponseMessage(ExceptionUtils
            .convertToString(errorMessage.getPayload()));
    return MessageBuilder.withPayload(custSyncResponse)
            .copyHeaders(errorMessage.getHeaders())
            .copyHeadersIfAbsent(errorException.getFailedMessage().getHeaders())
            .build();
}
Service Activator Config
Used SpEL to reference the #root context to retrieve the whole ErrorMessage, instead of the default (which would be just the MessagingException payload), and passed it to the process method on the POJO.
<bean id="custSyncResponseHandler" class="com.uscs.crm.integration.handler.CustSyncResponseHandler" />
<int:chain id="errorGatewayResponseChain" input-channel="defaultErrorChannel" output-channel="replySyncCustomers">
<int:service-activator id="custSyncResponseActivator" expression="#custSyncResponseHandler.process(#root)" />
</int:chain>
I don't see a reason to introduce the AMQP middleware complexity there just to send an email at the end.
All you need is a <publish-subscribe-channel id="defaultErrorChannel"/> with two endpoints as subscribers to it.
The first one is the one-way email-sending <chain>, and the second one is the custSyncResponseActivator that sends the reply back to your <int-http:outbound-gateway>.
You can find more info on the matter in the Spring Integration Reference Manual.
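As a rough sketch of that suggestion, reusing the bean and channel names from the question (an outline, not a drop-in config):
<int:publish-subscribe-channel id="defaultErrorChannel"/>

<!-- subscriber 1: one-way email notification -->
<int:chain input-channel="defaultErrorChannel">
    <int:service-activator expression="@mailHandler.process(payload)"/>
    <int-mail:outbound-channel-adapter mail-sender="mailSender"/>
</int:chain>

<!-- subscriber 2: build the reply for the waiting <int:gateway> caller -->
<int:service-activator id="custSyncResponseActivator"
    input-channel="defaultErrorChannel" output-channel="replySyncCustomers"
    expression="@custSyncResponseHandler.process(#root)"/>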

Camel JMS CXF endpoints doesn't create new temp reply queue

I have two CXF endpoints which use JMS as the transport; one is used as a consumer and the second as a producer. Here is a very trimmed down setup.
<camelcxf:cxfEndpoint xmlns:i="http://inbound.com/inbound"
id="myInboundEndpoint"
endpointName="i:InboundService"
serviceName="i:InboundService"
address="camel://direct:my-inbound-route"
serviceClass="com.InboundService"
bus="cxf"
wsdlURL="classpath:META-INF/wsdl/inbound.wsdl">
<camelcxf:properties>
<entry key="dataFormat" value="POJO"/>
</camelcxf:properties>
</camelcxf:cxfEndpoint>
<camelcxf:cxfEndpoint xmlns:o="http://outbound.com/outbound"
id="myOutboundEndpoint"
endpointName="o:OutboundService"
serviceName="o:OutboundService"
address=""jms://""
serviceClass="com.OutboundService"
bus="cxf"
wsdlURL="classpath:META-INF/wsdl/outbound.wsdl">
<camelcxf:properties>
<entry key="dataFormat" value="POJO"/>
</camelcxf:properties>
<camelcxf:features>
<bean class="org.apache.cxf.transport.jms.JMSConfigFeature">
<property name="jmsConfig" ref="jmsConfig" />
</bean>
</camelcxf:features>
</camelcxf:cxfEndpoint>
<bean id="jmsConfig" class="org.apache.cxf.transport.jms.JMSConfiguration">
<property name="connectionFactory" ref="pooledConnectionFactory" />
<property name="targetDestination" value="some-queue" />
</bean>
<camelContext>
<route id="inQueue">
<from uri="activemq:inbound-queue" />
<to uri="direct:my-inbound-route" />
</route>
<route id="inVm">
<from uri="direct:in-vm" />
<to uri="direct:my-inbound-route" />
</route>
<route id="serviceProxy">
<from uri="cxf:bean:myInboundEndpoint?synchronous=true" />
<setHeader headerName="operationName"><constant>myOtherOperation</constant></setHeader>
<to uri="cxf:bean:outboundEndpoint?synchronous=true" />
</route>
</camelContext>
But what happens when the second route is called is that the CXF component or Camel tries to re-use all the JMS config from the original inbound message, including the reply queue, rather than creating another temp reply queue just for this exchange. This seems to be taken from the headers of the in message.
If you use pure JMS and take CXF out of the equation, Camel correctly creates a new temp queue for the inner part of the route; however, I need to continue using CXF because there are some legacy interceptors I'm bound to use.
I have tried both the jms:// + JMSConfig style and the camel:// style.
I am currently using the jaxws:client approach, just referencing bean:myBean?method=myMethod, which works but doesn't allow me to propagate SOAP headers from the original inbound method, hence switching to cxf:endpoint instead.
I have tried to find an example of someone using SOAP over JMS with CXF, and there seem to be no concrete examples.
So the question is: is there any additional configuration I need for my producer, or is there some other way I can do SOAP over JMS using CXF and propagate/set some headers from the original message/Camel exchange?
I think you may need to filter the replyTo header out of the inMessage.
If you want to use SOAP over JMS, you can specify all the JMS-related settings on the address without hacking the JMSConfiguration. Here is the document that you can take as an example.
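For the first suggestion, a sketch of the route from the question with the reply-to header stripped before the outbound call (the header pattern is an assumption; JMSReplyTo is the usual culprit, and Camel's removeHeaders accepts a pattern):
<route id="serviceProxy">
    <from uri="cxf:bean:myInboundEndpoint?synchronous=true" />
    <!-- drop the inbound reply-to so CXF/Camel creates a fresh temp reply queue -->
    <removeHeaders pattern="JMSReplyTo" />
    <setHeader headerName="operationName"><constant>myOtherOperation</constant></setHeader>
    <to uri="cxf:bean:myOutboundEndpoint?synchronous=true" />
</route>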

Bayeux Server Configuration Issue

We had an issue with our CometD/GigaSpaces application creating duplicate instances of the Bayeux Server. See my previous question posted here.
After investigating this issue with GigaSpaces, it turned out each bean defined in our Application Context File was getting created twice, as GigaSpaces gives special treatment to Application Context Files named PU.XML. We've resolved this issue by renaming the PU.XML file, but the problem we have now is that we're not receiving any data on the client side and receive the following error: "NetworkError: 400 Unknown Bayeux Transport - http://localhost:9292/cometd".
Previously, when the application created a duplicate instance of the Bayeux Server, we put a workaround in place to terminate the first instance of the thread the Bayeux Server was running on, and as a result we were able to publish data on our channels using Web Sockets, which we configured in the Application Context File.
Could you have a look at our current configuration and let me know if there is an alternative solution to configure and export the Bayeux Server correctly using Spring? Is it possible the Bayeux bean is not getting exported correctly, or that it is getting exported too late?
I've posted our updated Web.XML and Application Context configurations below. The CometD version/jars in our POM.XML are the same as in my previous post. If you need further info, please let me know.
Current Web.XML:
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
id="WebApp_ID" version="2.5">
<display-name>CometDApplication</display-name>
<servlet>
<servlet-name>cometd</servlet-name>
<servlet-class>org.cometd.server.CometdServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>cometd</servlet-name>
<url-pattern>/cometd/*</url-pattern>
</servlet-mapping>
<listener>
<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
<!-- <listener>
<listener-class>org.openspaces.pu.container.jee.context.ProcessingUnitContextLoaderListener</listener-class>
</listener>-->
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>/WEB-INF/applicationContext-gigaspaces.xml</param-value>
</context-param>
</web-app>
Current applicationContext-gigaspaces.XML:
<bean id="Bayeux" class="org.cometd.server.BayeuxServerImpl"
init-method="start" destroy-method="stop">
<property name="options">
<map>
<entry key="logLevel" value="0" />
<entry key="timeout" value="15000" />
</map>
</property>
<property name="transports">
<list>
<!-- The order of the following transports dictates the type of transport
used i.e. Web Sockets then JsonTransport (a.k.a long-polling) -->
<bean id="websocketTransport" class="org.cometd.websocket.server.WebSocketTransport">
<constructor-arg ref="Bayeux" />
</bean>
<bean id="jsonTransport" class="org.cometd.server.transport.JSONTransport">
<constructor-arg ref="Bayeux" />
</bean>
<bean id="jsonpTransport" class="org.cometd.server.transport.JSONPTransport">
<constructor-arg ref="Bayeux" />
</bean>
</list>
</property>
</bean>
<!-- Export the Bayeux Server to the servlet context via Spring's ServletContextAttributeExporter -->
<bean id="ContextExporter"
class="org.springframework.web.context.support.ServletContextAttributeExporter">
<property name="attributes">
<map>
<entry key="org.cometd.bayeux">
<ref local="Bayeux" />
</entry>
</map>
</property>
</bean>
The code you posted is correct and virtually identical to the tests present in CometD, see here and here.
You have something else going on; debug logs on both the client and the server will help you understand what is happening.
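On the server side, one way to get those debug logs with the configuration above is to raise the logLevel option on the Bayeux bean; in CometD 2.x a value of 3 typically means debug (verify against the version you are actually running):
<property name="options">
    <map>
        <!-- was 0 (off) in the original config; 3 = debug in CometD 2.x -->
        <entry key="logLevel" value="3" />
        <entry key="timeout" value="15000" />
    </map>
</property>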