Spring Cloud Stream Rabbit Binder Routing Key always '#'

Version: Spring Boot: 1.4.2.RELEASE
Spring Cloud Deps: Brixton.SR7
Here is my application.properties of a processor app.
logging.level.=DEBUG
server.port=0
logging.file=traveller-events-processor.log
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey='aa'
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey=aa
spring.cloud.stream.rabbit.bindings.input.consumer.bindQueue=true
spring.cloud.stream.rabbit.bindings.input.consumer.routing-key='aa'
spring.cloud.stream.rabbit.bindings.input.consumer.routingKey='aa'
spring.cloud.stream.bindings.input.destination=events-exchange
spring.cloud.stream.bindings.input.group=eventconsumersgroup
spring.cloud.stream.bindings.output.destination=work.out
spring.cloud.stream.bindings.output.contentType=text/plain
spring.cloud.stream.bindings.output.binder=rabbit
spring.cloud.stream.bindings.output.group=traveller-events-output-group
When I start this app, events-exchange is created as expected and bound to a queue named events-exchange.eventconsumersgroup (which is also OK). But the routing key is always '#'. I've tried all the options I could find in various pieces of documentation. Am I missing something here?
I want this app to only subscribe to certain messages (which I want to achieve via the routing key).

I see that Brixton.SR7 uses 1.0.2.RELEASE of Spring Cloud Stream, and I don't find routingKey as a Rabbit consumer property there. Would you upgrade to the Spring Cloud Camden release (or the latest one) so that you can try the consumer property bindingRoutingKey, as mentioned here?
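For example, after upgrading, a binding along these lines should work (a sketch based on the Camden-era Rabbit binder consumer properties; the aa routing key is just illustrative):
spring.cloud.stream.bindings.input.destination=events-exchange
spring.cloud.stream.bindings.input.group=eventconsumersgroup
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey=aa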

Related

Spring batch integration using OutBoundGateway and ReplyingKafkaTemplate

My Goal
I need to read a file, turn each line into a message, and send it to Kafka from a Spring Batch project; another Spring Integration project will receive the messages and process them asynchronously. After processing, I want to return those messages to the batch project and create 4 different files out of them.
Here I am trying to use an outbound gateway and ReplyingKafkaTemplate, but I am unable to configure them properly. Is there any example or reference guide for configuring this?
I have checked the Spring Batch integration samples GitHub repository; there is no sample for an outbound gateway or ReplyingKafkaTemplate.
Thanks in advance.
For ReplyingKafkaTemplate logic, Spring Integration provides a dedicated KafkaProducerMessageHandler that can be configured with a ReplyingKafkaTemplate.
See more info in the docs:
https://docs.spring.io/spring-integration/docs/current/reference/html/kafka.html#kafka-outbound-gateway
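A minimal sketch, adapted from that docs section (the kafkaRequests/kafkaReplies channel names are illustrative):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;

@Configuration
public class KafkaGatewayConfig {

    @Bean
    @ServiceActivator(inputChannel = "kafkaRequests", outputChannel = "kafkaReplies")
    public KafkaProducerMessageHandler<String, String> outGateway(
            ReplyingKafkaTemplate<String, String, String> kafkaTemplate) {
        // constructed with a ReplyingKafkaTemplate, the handler works in
        // request/reply (gateway) mode rather than as a one-way adapter
        return new KafkaProducerMessageHandler<>(kafkaTemplate);
    }
}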
And more about ReplyingKafkaTemplate:
https://docs.spring.io/spring-kafka/reference/html/#replying-template
On the other side, a KafkaInboundGateway would probably have to be configured, respectively:
https://docs.spring.io/spring-integration/docs/current/reference/html/kafka.html#kafka-inbound-gateway

correlationId propagated to spring sleuth 1.x

I have the following setup:
Proxy (P) -- HTTP --> Spring Boot 2 app (X) -- HTTP --> Spring Boot 1 app (Y)
The proxy sends the requestId as an HTTP header which I need to include in the logs of both X and Y.
For the X app I could easily do it with the support of Spring Cloud Sleuth 2 using
spring:
  sleuth:
    propagation-keys: requestId
and creating a CurrentTraceContext implementation, inspired by Slf4jCurrentTraceContext, where I add
MDC.put("requestId", ExtraFieldPropagation.get(currentSpan, "requestId"));
and then I can easily add it to the logs using the following log pattern:
%d{yy-MM-dd E HH:mm:ss.SSS} %5p [component=${springAppName},requestId=%X{requestId:-}] %m%n
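For reference, a minimal sketch of such a decorating CurrentTraceContext (the class name is my own; the delegate pattern is modeled on Slf4jCurrentTraceContext):
import brave.propagation.CurrentTraceContext;
import brave.propagation.ExtraFieldPropagation;
import brave.propagation.TraceContext;
import org.slf4j.MDC;

public class RequestIdCurrentTraceContext extends CurrentTraceContext {

    private final CurrentTraceContext delegate;

    public RequestIdCurrentTraceContext(CurrentTraceContext delegate) {
        this.delegate = delegate;
    }

    @Override
    public TraceContext get() {
        return delegate.get();
    }

    @Override
    public Scope newScope(TraceContext currentSpan) {
        if (currentSpan != null) {
            // copy the propagated extra field into the MDC so %X{requestId} resolves
            MDC.put("requestId", ExtraFieldPropagation.get(currentSpan, "requestId"));
        }
        Scope scope = delegate.newScope(currentSpan);
        return () -> {
            MDC.remove("requestId");
            scope.close();
        };
    }
}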
But now I need to propagate the requestId also to Y app.
Unfortunately, there I cannot leverage the goodies introduced in Spring Cloud Sleuth 2.0 (like TraceContext from the Brave library), since that is a Spring Boot 1.x app.
I am wondering what the options are.
I was thinking of extending Slf4jSpanLogger and injecting it into DefaultTracer, but I am not sure how to get the requestId, since there is no TraceContext in SpanLogger.
requestId has to be present in the headers. You would have to modify the current HTTP-header-parsing logic for Boot 1.x to retrieve that value from the headers and put it in the span.
The easiest way, however, would be to propagate that value as baggage, because baggage works out of the box for Boot 1.x. If Sleuth 1.3.x sees the baggage- prefixed headers, it will automatically propagate them. Remember to whitelist the baggage keys in Boot 2.0.
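On the Boot 2.x side, a minimal sketch using the spring.sleuth.baggage-keys property (Sleuth then sends the field as baggage- prefixed headers, which Sleuth 1.3.x on the Boot 1.x side picks up automatically):
spring:
  sleuth:
    baggage-keys: requestId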

SpringCloudStream - Slow Consumer for RabbitMQ binder

I have a use case of sending HTTP POSTs to an HTTP Source created as a Spring Boot app from the Spring Cloud Stream App Starters' http source. This publishes 5k records/sec. I have a Sink application that persists the data to MongoDB. Reads in the sink app are very slow, around 20 msgs/sec. I am using the following properties and see no difference. I am using the prefix spring.cloud.stream.rabbit.binder to load the properties. Can you please let me know how to achieve concurrency when reading data from the RabbitMQ binder?
application.properties
spring.cloud.stream.binder.rabbit.default.prefix=z.
spring.cloud.stream.bindings.input.destination=http-source
spring.cloud.stream.bindings.input.durableSubscription=true
spring.cloud.stream.bindings.input.group=default
spring.cloud.stream.rabbit.binder.addresses=localhost:5672
spring.cloud.stream.rabbit.binder.username=guest
spring.cloud.stream.rabbit.binder.password=guest
spring.cloud.stream.rabbit.binder.listener.concurrency=100
spring.cloud.stream.rabbit.binder.listener.max-concurrency=500
spring.cloud.stream.rabbit.binder.listener.prefetch=1000
spring.cloud.stream.rabbit.binder.listener.acknowledge-mode=NONE
server.port=${listen.port}
####################################################
# Mongo
# Configuration - DEV
####################################################
mongodbDatabasename=*****
mongodbPassword=*****
mongodbUsername=*****
mongodbReplicaName=
mongodbAddresses=localhost:27017
mongodbAuthenticationDatabase=users
mongodbAuthMechanism=SCRAM-SHA-1
region=DEV
collectionName=*****
mongodbSocketTimeout=25000
mongodbConnectionTimeout=5000
maxConnectionForHost=5
minConnectionForHost=100
Thanks, and I appreciate your help.
Karthik
I believe you need to set the concurrency and other consumer-related properties as per-binding consumer properties (with the prefix spring.cloud.stream.rabbit.bindings.<channelName>.consumer.), as sketched below. You can find more detail here.
I am not sure how you came up with properties with the prefix spring.cloud.stream.rabbit.binder.listener.concurrency. Did you see this anywhere in the documentation?
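For example, a per-binding configuration might look like this (values are illustrative; concurrency is a common consumer property, while maxConcurrency and prefetch are Rabbit-binder-specific):
spring.cloud.stream.bindings.input.consumer.concurrency=10
spring.cloud.stream.rabbit.bindings.input.consumer.maxConcurrency=50
spring.cloud.stream.rabbit.bindings.input.consumer.prefetch=100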

spring cloud auto refresh config server property

I have configured Spring Cloud Config to pick up properties from GitHub. If I POST to /refresh, I am also able to get the updated value in my application.
Now I want the properties to be updated automatically. That means I don't want to hit the refresh API to get changes from the GitHub property file reflected in my application.
Do I need to set up RabbitMQ and Spring Cloud Bus for this, or is there some other, simpler way to do it?
Also, the documentation says that we need to add a dependency on the spring-cloud-config-monitor library for push notifications.
http://projects.spring.io/spring-cloud/spring-cloud.html#_push_notifications_and_spring_cloud_bus
But I did not find any such dependency in Maven to add. I am not sure if my understanding is wrong. Please help.
You would need a Config server with Spring Cloud Bus and RabbitMQ (or Kafka or Redis) support.
RabbitMQ with the following exchange:
name: springCloudBus
type: topic
durable: true
autoDelete: false
internal: false
The config server would send data to the topic once it receives push events from Git (GitHub, Bitbucket, GitLab) via a webhook to http://<config-server>/monitor.
And a client application with Config and RabbitMQ libraries, subscribed to the previous exchange to receive messages of the properties that need to be refreshed.
More can be found in my blog post at http://tech.asimio.net/2017/02/02/Refreshable-Configuration-using-Spring-Cloud-Config-Server-Spring-Cloud-Bus-RabbitMQ-and-Git.html, with a brief explanation of the configuration, the logs, and full source code for the Config server and a client app.
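As a minimal sketch, the client side mostly needs the spring-cloud-starter-bus-amqp dependency plus the standard RabbitMQ connection properties (values illustrative):
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest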
They are not generally available yet. You need to add http://repo.spring.io/milestone/ as a maven repository and use a milestone release.

Does Feign retry require some sort of configuration?

I just attempted a seamless upgrade of a service in a test setup. The service is being accessed by a Feign client, and naively I was under the impression that with multiple instances of the service available, the client would retry another instance if it failed to connect to one.
That, however, did not happen, and I cannot find any mention of how Feign in Spring Cloud is supposed to be configured to do this, although I have seen mentions of it supporting retries (as opposed to RestTemplate, where you would use something like Spring Retry).
If you are using Ribbon you can set properties similar to the following (substituting "localapp" with your service id):
localapp.ribbon.MaxAutoRetries=5
localapp.ribbon.MaxAutoRetriesNextServer=5
localapp.ribbon.OkToRetryOnAllOperations=true
P.S. Underneath, Feign has a Retryer interface, which was made to support things like Ribbon:
https://github.com/Netflix/feign/blob/master/core/src/main/java/feign/Retryer.java
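For example, a custom Retryer can be registered in a Feign client configuration class; a minimal sketch assuming Spring Cloud Netflix Feign (class name and values are illustrative):
import feign.Retryer;
import org.springframework.context.annotation.Bean;

public class FeignRetryConfig {

    @Bean
    public Retryer retryer() {
        // retry up to 5 attempts, starting at a 100 ms interval and
        // backing off to at most 1 second between attempts
        return new Retryer.Default(100, 1000, 5);
    }
}
Reference it from the client with @FeignClient(name = "localapp", configuration = FeignRetryConfig.class).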
See if this property works: OkToRetryOnAllOperations: true
You can refer to this application:
https://github.com/spencergibb/spring-cloud-sandbox/blob/master/spring-cloud-sandbox-sample-frontend/src/main/resources/application.yml
Spencer was quick... I was late by a few minutes :-)