How to set routing key for producer - spring-cloud

I have the following test scenario for a Spring Cloud Stream based application.
I have one topic exchange for my application with two queues. The binding key for the first queue is "cities"; for the second queue it is "persons".
How do I set the routing key for the Spring Cloud Stream Rabbit producer, so that I can control which queue the message will be consumed from?
This is my binding configuration:
spring.config.name=streaming
spring.cloud.stream.bindings.citiesChannel.destination=streamInput
spring.cloud.stream.bindings.citiesChannel.group=cities
spring.cloud.stream.rabbit.bindings.citiesChannel.consumer.durableSubscription=true
spring.cloud.stream.rabbit.bindings.citiesChannel.consumer.bindingRoutingKey=cities
spring.cloud.stream.bindings.personsChannel.destination=streamInput
spring.cloud.stream.bindings.personsChannel.group=persons
spring.cloud.stream.rabbit.bindings.personsChannel.consumer.durableSubscription=true
spring.cloud.stream.rabbit.bindings.personsChannel.consumer.bindingRoutingKey=persons
spring.cloud.stream.bindings.producingChannel.destination=streamInput
The only way I found to distinguish where the message will be sent (the cities or persons queue) when publishing into producingChannel is through the "spring.cloud.stream.bindings.producingChannel.producer.requiredGroups" property, but this is highly unusable: I don't want to know anything about the queue where my message is going to land. That is an AMQP antipattern.
I want nothing more than functionality similar to the RabbitTemplate.setRoutingKey(String routingKey) method when publishing into producingChannel...:-(

Use routingKeyExpression on the producer side - see the documentation.
Since it's an expression, you'll need quotes: 'cities' or if the same producer sends to both, something like headers['whereToSendHeader'].
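Spelled out as properties for the producer binding from the question (a sketch; producingChannel comes from the question above, and the header name is just an example):

```properties
# Fixed routing key for everything published to producingChannel.
# The value is a SpEL expression, hence the quotes around the literal.
spring.cloud.stream.rabbit.bindings.producingChannel.producer.routingKeyExpression='cities'

# Alternatively, pick the routing key per message from a header
# set by the sender (the header name here is hypothetical):
# spring.cloud.stream.rabbit.bindings.producingChannel.producer.routingKeyExpression=headers['whereToSendHeader']
```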

For those of us using yaml:
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          somechannel:
            producer:
              bindingRoutingKey: routingKey
              routing-key-expression: '"routingKey"'
Source
Note the bindingRoutingKey above - this is used if you expect your queue to be bound at producer startup when using spring.cloud.stream.bindings.someChannel.producer.requiredGroups

Yes, thanks a lot.
Adding
spring.cloud.stream.rabbit.bindings.producingChannel.producer.routingKeyExpression='persons'
causes messages to land in a streaming.persons queue and
spring.cloud.stream.rabbit.bindings.producingChannel.producer.routingKeyExpression='cities'
in the streaming.cities queue. Exactly what I wanted.
Thank you. It seems we're going to use Spring Cloud Stream in the project after all..:-)

Another (AMQP-agnostic) way of doing this is by using the dynamic destination support, i.e. http://docs.spring.io/autorepo/docs/spring-cloud-stream-docs/Chelsea.SR2/reference/htmlsingle/#dynamicdestination
The main difference in the case of Rabbit is that you'll end up having 2 separate exchanges ('cities' and 'persons') - so it does not leverage the routing support there, but it is portable to other messaging systems such as Kafka for example.

Related

Why does Akka HTTP close the user connection when multiple messages are produced?

I have a simple WebSocket application based on Akka HTTP/Reactive Streams, like this: https://github.com/calvinlfer/akka-http-streaming-response-examples/blob/master/src/main/scala/com/experiments/calvin/ws/WebSocketRoutes.scala#L82.
In other words, I have a Sink, a Source (produced from a Publisher), and the Flow:
Flow.fromSinkAndSource(incomingMessages, outgoingMessages)
When I produce more than 30 messages per second to the client, Akka closes the connection.
I cannot understand where the setting is that configures this behaviour. I know about OverflowStrategy, but I don't configure it explicitly.
It seems that I have OverflowStrategy.fail(), or at least my problem looks like it.
You can tune the internal buffers.
There are two ways to do it:
1) In application.conf:
akka.stream.materializer.max-input-buffer-size = 1024
2) Or configure it explicitly for your Flow:
Flow.fromSinkAndSource(incomingMessages, outgoingMessages)
.addAttributes(Attributes.inputBuffer(initial = 1, max = 1024))
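The effect of a bounded internal buffer with a fail-style overflow strategy can be sketched in plain Java (an analogy only, not the Akka API): once the buffer is full and nothing is draining it, further offers are rejected instead of blocking. In Akka Streams that rejection surfaces as a failed stream, which is why the connection closes.

```java
import java.util.concurrent.ArrayBlockingQueue;

public class OverflowDemo {
    public static void main(String[] args) {
        // Bounded buffer of size 4, standing in for a stage's input buffer.
        ArrayBlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(4);
        int rejected = 0;
        // Fast producer with no consumer draining: offer() returns false
        // once the buffer is full, which is roughly what a fail-style
        // overflow strategy turns into a stream failure.
        for (int msg = 0; msg < 30; msg++) {
            if (!buffer.offer(msg)) {
                rejected++;
            }
        }
        System.out.println("buffered=" + buffer.size() + " rejected=" + rejected);
    }
}
```

Raising max-input-buffer-size (or Attributes.inputBuffer) only enlarges the buffer; a consumer that is persistently slower than the producer will still overflow it eventually.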

Restrict event notifier for Multicast EIP

Is it possible to know whether any EIP has been used in my context - via a unique id or something like that?
When an exchange is copied, an event is notified for each copy. Is there any possibility to restrict it to notify only once?
Ref: used and tested with the Apache Camel Multicast EIP.
Solution:
1. Multicast EIP: the copy is identifiable in the Camel exchange properties as CamelMulticastIndex.
2. WireTap EIP: the copy is visible in the Camel MessageHistory (InOnly pattern).
Hope this helps.
You can assign ids to any processor or endpoint in Camel. Would that help?

HornetQ message remains in queue after ack

We are using hornetq-core 2.2.21.Final (standalone). After reading a non-transactional message, the message still remains in the queue although it is acknowledged.
The session is created using:
sessionFactory.createSession(true, true, 0)
locator setting:
val transConf = new TransportConfiguration(classOf[NettyConnectorFactory].getName,map)
val locator = HornetQClient.createServerLocatorWithoutHA(transConf)
locator.setBlockOnDurableSend(false)
locator.setBlockOnNonDurableSend(false)
locator.setAckBatchSize(0) // also tried without this setting
locator.setConsumerWindowSize(0)// also tried without this setting
The message is acknowledged using message.acknowledge().
I think the problem might be the two queues on the same address.
I also tried setting a message expiration, but it didn't help; messages are still piling up in the queue.
Please advise.
It seems you are using the core API. Are you explicitly calling acknowledge on the messages?
If you have two queues on the same address, ack will only acknowledge the messages on the queue you are consuming from. In that case the system is acting normally.
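That fan-out behaviour can be sketched in plain Java (an analogy only, not the HornetQ API): each queue bound to an address gets its own copy of every message, so acking from one queue leaves the other queue's copies untouched.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public class AddressFanOutDemo {
    public static void main(String[] args) {
        // Two queues bound to the same address: each receives
        // its own copy of every message sent to that address.
        Queue<String> queueA = new ArrayDeque<>();
        Queue<String> queueB = new ArrayDeque<>();
        for (String msg : List.of("m1", "m2")) { // "send to the address"
            queueA.add(msg);
            queueB.add(msg);
        }
        // A consumer on queueA acks (removes) everything it received...
        while (queueA.poll() != null) { /* acknowledged */ }
        // ...but queueB still holds its copies, so messages
        // appear to "remain in the queue".
        System.out.println("queueA=" + queueA.size() + " queueB=" + queueB.size());
    }
}
```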

Apache Camel throttling with a SOAP endpoint -> TransformerException

We have an Apache Camel application providing a SOAP service. The "initial route" starts from an Apache CXF-provided endpoint.
We need a simple mechanism to prevent the messages from being handled "too fast" (we don't have massive scalability needs).
Thus we ended up trying the Throttler. The problem is that after adding throttle to our route, something goes wrong.
The initial route, somewhat cleaned:
from("cxf:bean:sapEndpoint").routeId(SOAP_ENDPOINT)
.throttle(1)
.onException(Exception.class)
.to("direct:emailFaultNotification").handled(false)
.end()
.transacted(joinJpaTx)
.to(xsltRemoveEmptyElements) // Cleaning done with XSLT endpoint
.to("direct:inboundWorkOrderXml"); // Forward to actual processing
// direct:inboundWorkOrderXml contains various validation, persistance & response
Error in our log:
2013-02-18 16:50:16,257 [tp1636587648-50] ERROR DefaultErrorHandler - Failed delivery for exchangeId: ID-...-4. Exhausted after delivery attempt: 1 caught: javax.xml.transform.TransformerException: javax.xml.transform.TransformerException: com.sun.org.apache.xml.internal.utils.WrappedRuntimeException: Content is not allowed in prolog.
javax.xml.transform.TransformerException: javax.xml.transform.TransformerException: com.sun.org.apache.xml.internal.utils.WrappedRuntimeException: Content is not allowed in prolog.
at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:735)[:1.6.0_37]
at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:336)[:1.6.0_37]
at org.apache.camel.builder.xml.XsltBuilder.process(XsltBuilder.java:98)[camel-core-2.7.0.jar:2.7.0]
at org.apache.camel.impl.ProcessorEndpoint.onExchange(ProcessorEndpoint.java:102)[camel-core-2.7.0.jar:2.7.0]
at org.apache.camel.impl.ProcessorEndpoint$1.process(ProcessorEndpoint.java:72)[camel-core-2.7.0.jar:2.7.0]
...
I suppose the throttler doesn't work quite the way I assumed.
It seems that with throttling enabled, the XSLT endpoint receives empty or invalid XML? Without the throttle definition everything works fine, and from a quick test the message body still seems to contain the XML string.
Any ideas?
When using Camel error handling for redelivery, mind streamed payloads. See stream caching at: http://camel.apache.org/stream-caching.html
There is also a tip about this at the top of the Camel CXF documentation page: http://camel.apache.org/cxf
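The issue the stream-caching tip addresses can be shown with plain Java: a streamed payload can only be read once, so a second reader (here standing in for the XSLT step after something else consumed the stream) sees nothing, which is exactly the kind of input that makes an XML parser report "Content is not allowed in prolog".

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class OneShotStreamDemo {
    public static void main(String[] args) throws Exception {
        InputStream body = new ByteArrayInputStream(
                "<order id=\"1\"/>".getBytes(StandardCharsets.UTF_8));
        // First consumer reads the whole stream.
        String first = new String(body.readAllBytes(), StandardCharsets.UTF_8);
        // Second consumer gets nothing: the stream is already exhausted.
        String second = new String(body.readAllBytes(), StandardCharsets.UTF_8);
        System.out.println("first=" + first.length() + " second=" + second.length());
    }
}
```

Camel's stream caching wraps such one-shot streams in a re-readable cache, which is why enabling it (as the linked pages describe) avoids the empty-body symptom.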
Finally the solution was simpler than I thought. Instead of using throttle in the route starting from "cxf:bean:sapEndpoint", I added the throttle to the route handling "direct:inboundWorkOrderXml".
I don't know the exact reason; it could be that some parts of the throttle functionality behave differently depending on the route's from-endpoint (so the problem hit with the cxf endpoint is not encountered with the direct endpoint).

JMS QueueRequestor and deleted Destination

I'm using ActiveMQ 5.3.1.
My configuration is good for classical async messaging.
I'm trying to use a QueueRequestor.
The message is effectively sent and received.
But when it's time to answer on the temp queue, I get this exception:
javax.jms.InvalidDestinationException: Cannot publish to a deleted Destination: temp-queue://ID:......
The destination doesn't exist.
I'm using the default configuration for ActiveMQ.
Any idea?
I just found my answer:
the implementation of QueueRequestor sends and receives on the same JMS session.
That's why the receiver of the requestor never saw any message, and why the temp destination couldn't be used to publish the reply.
My solution is to create a requestor with two sessions.
The actual implementation will be very similar to the one in the blog post above.