In Spring Cloud Stream you can declare a dead letter queue for the "input1" binding with:
spring.cloud.stream.rabbit.bindings.input1.consumer.auto-bind-dlq=true
If you have n bindings, you have to repeat this line n times in the application.properties file, which is rather repetitive.
I want to declare a dead letter queue for all my bindings, something like:
spring.cloud.stream.rabbit.bindings.default.consumer.auto-bind-dlq=true
Is this possible with properties? Is there any way to do it with @Configuration?
Thanks!
So, you need to make sure that you use Boot 2.1.x, since there was a significant improvement in Boot with regard to property merging, and we are consumers of that improvement.
Also, the correct property name should be spring.cloud.stream.rabbit.default...
For example, here is the working configuration:
spring.cloud.stream.default.group=myGroup
spring.cloud.stream.bindings.input1.destination=myDestination
spring.cloud.stream.rabbit.default.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.default.consumer.dead-letter-queue-name=myDlx
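With those defaults in place, any additional binding, for example a hypothetical input2, inherits the same group and DLQ settings without repeating them per binding:
spring.cloud.stream.bindings.input2.destination=myOtherDestination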
We probably need to clarify this a bit more in the documentation.
Related: I have referred to this, but it is an old post, so I'm looking for a better solution if there is one.
I have an input topic that contains 'userActivity' data. Now I wish to gather different analytics based on userInterest, userSubscribedGroup, userChoice, etc., each produced to a distinct output topic from the same Kafka Streams application.
Could you help me achieve this? P.S.: This is my first time using Kafka Streams, so I'm unaware of any alternatives.
Edit:
It's possible that one record matches multiple criteria, in which case the same record should go to each of those output topics.
if(record1 matches criteria1) then... output to topic1;
if(record1 matches criteria2) then ... output to topic2;
and so on.
Note: I'm not looking for an if/else-if kind of solution.
For dynamically choosing which topic to send to at runtime, based on each record's key and value, Apache Kafka 2.0 and later support a feature called dynamic routing: a TopicNameExtractor can be passed to KStream#to().
And this is an example of it: https://kafka-tutorials.confluent.io/dynamic-output-topic/confluent.html
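As a rough sketch of how that can look (the UserActivity and RoutedActivity types, the matches... helpers and the topic names below are made-up placeholders, and suitable serdes for these types are assumed to be configured):
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class DynamicRoutingSketch {

    // minimal stand-ins for the real record types (placeholders, not from the question)
    record UserActivity(String userId, String interest, String group) {}
    record RoutedActivity(String targetTopic, UserActivity activity) {}

    static boolean matchesInterest(UserActivity a) { return a.interest() != null; }
    static boolean matchesSubscribedGroup(UserActivity a) { return a.group() != null; }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, UserActivity> activities = builder.stream("user-activity");

        // Fan out: one copy of the record per matching criterion, tagged with its target topic.
        KStream<String, RoutedActivity> routed = activities.flatMapValues(value -> {
            List<RoutedActivity> out = new ArrayList<>();
            if (matchesInterest(value)) {
                out.add(new RoutedActivity("user-interest-topic", value));
            }
            if (matchesSubscribedGroup(value)) {
                out.add(new RoutedActivity("user-subscribed-group-topic", value));
            }
            // ...further criteria in the same way
            return out;
        });

        // Kafka 2.0+: to(TopicNameExtractor) resolves the output topic per record at runtime.
        routed.to((key, value, recordContext) -> value.targetTopic());

        builder.build(); // pass the resulting Topology to new KafkaStreams(...) with the usual config
    }
}
Each record is fanned out to one copy per matching criterion, so a record that matches several criteria ends up in several output topics, without an if/else-if chain.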
I still cannot wrap my head around how Kafka producers/consumers and the schema registry are intended to reuse KafkaProperties. Or are they not intended to reuse the same structures?
For the schema registry, I have to configure, for example, the following properties:
spring.kafka.basic.auth.credentials.source
spring.kafka.basic.auth.user.info
spring.kafka.producer.properties.schema.registry.url
spring.kafka.consumer.properties.schema.registry.url
But if I do so, call org.springframework.boot.autoconfigure.kafka.KafkaProperties#buildConsumerProperties, and proceed with the built map, I get, for example, this warning:
The configuration 'schema.registry.url' was supplied but isn't a known config.
I saw a recommendation to set the schema registry URLs as such, and I also saw basic.auth… being set like this. I really cannot get it working. I mean, the app works, I just get several pages of these warnings. I'd like to know how to configure the app correctly, as it was intended by design, so that I can share one configuration for Confluent Kafka and the schema registry.
Sure, I could keep a separate set of properties "to be added" for the schema registry, or bend it somehow so that I build separate property sets for both, but that just doesn't feel right; this clearly isn't how it was (I hope) designed. What is the correct procedure here? Maybe it's hidden somewhere in the depths of the auto-configuration, but if it's there, I cannot find it.
I answered a similar question on Gitter earlier today; it comes up a lot:
https://gitter.im/spring-projects/spring-kafka?at=60d490459cf3171730f0d2d4
It's rather unfortunate that Kafka supports configuring (de)serializers via the producer/consumer configs, but then emits these annoying (ignorable) log messages.
If it bothers you, I suggest you open a JIRA against the kafka-clients.
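For what it's worth, one way to share those settings between the producer and the consumer is to put them under spring.kafka.properties.*, which Spring Boot passes to every Kafka client it creates; the 'unknown config' warning from kafka-clients can still show up, as noted above. A sketch, with placeholder URL and credentials:
spring.kafka.properties.schema.registry.url=http://localhost:8081
spring.kafka.properties.basic.auth.credentials.source=USER_INFO
spring.kafka.properties.basic.auth.user.info=myUser:myPassword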
I've seen this (2010) and this (SO, 2012), but still have not got the answer I need...
Is there an option in Spring Batch to have a dynamic composite reader/processor/writer?
The idea is to be able to replace a processor at runtime and, in the case of multiple processors (i.e. a composite processor), to be able to add/remove/replace processors or change their order. As mentioned, the same goes for the reader and writer.
I thought of something like reading the processor list from the DB (with a cache?), where the items (bean names) can be changed. Does this make sense?
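Something along these lines is what I have in mind; just a rough sketch, all names are made up:
import java.util.List;
import java.util.function.Supplier;
import org.springframework.batch.item.ItemProcessor;

// Delegates to whatever chain of processors the supplier returns at processing time,
// e.g. bean names read from a DB table (with a cache) and resolved against the context.
public class DynamicCompositeItemProcessor<T> implements ItemProcessor<T, T> {

    private final Supplier<List<ItemProcessor<T, T>>> delegates;

    public DynamicCompositeItemProcessor(Supplier<List<ItemProcessor<T, T>>> delegates) {
        this.delegates = delegates;
    }

    @Override
    public T process(T item) throws Exception {
        T current = item;
        // the delegate list (and its order) is re-read on each call, so it can change at runtime
        for (ItemProcessor<T, T> delegate : delegates.get()) {
            current = delegate.process(current);
            if (current == null) {
                return null; // a "filter" processor dropped the item
            }
        }
        return current;
    }
}
The supplier would be backed by the DB (plus the cache), so changing the rows changes the chain without restarting the app.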
EDIT - why do I need this?
There are cases where I use processors as "filters", and it may happen that the business (the client) changes the requirements (yes, it is very annoying) and asks to switch among the filters (change their priority).
Another use case is having multiple readers to get the data from different data warehouses, and again the client changes the warehouse from time to time (integration phase), and I do not want my app to be restarted each and every time. There are many other use cases, of course, plus this.
Thanks
I've started working on this project:
https://github.com/OhadR/spring-batch-dynamic-composite
which implements the requirements in the question above. If anyone wants to contribute, feel free!
For a microservice I need the functionality to persist state (changes). Essentially, the following happens:
case class Item(i: Int)
val item1 = Item(0)
val item2 = exec(item1)
Where exec is user defined and hence not known in advance. As an example, let's assume this implementation:
def exec(item: Item) = item.copy(i = item.i + 1)
After each call to exec, I want to log the state changes (here: item.i: 0 -> 1) so that...
there is a history (e.g. a list of tuples like (timestamp, what changed, old value, new value))
state changes and snapshots can be persisted efficiently to a local file system and sent to a journal
arbitrary consumers (not only the specific producer where the changes originated) can be restored from the journal/snapshots
there are as few dependencies on libraries and infrastructure as possible (it is a small project; complex infrastructure/server installations & maintenance are not an option)
I know that EventStoreDB is probably the best solution; however, in the given environment (a huge enterprise with a lot of policies), it is not possible for me to install and run it. The only infrastructural options are an RDBMS or Kafka. I'd like to go with Kafka, as it seems to be the natural fit for this event-sourcing use case.
I also noticed that Akka Persistence seems to handle all of the requirements well. But I have a couple of questions:
Are there any alternatives I missed?
Akka Persistence's Kafka integration is only available through a community plugin that is not maintained regularly, which suggests to me that this is not a common use case. Is there any reason the outlined architecture is not widespread?
Is cloning possible? In the Akka documentation it says:
"So, if two different entities share the same persistenceId,
message-replaying behavior is corrupted."
So, let's assume two application instances, one and two, which both have unique persistenceIds. Could two be restored (cloned) from one's journal, even though they don't share the same id (which is not allowed)?
Are there any complete examples of this architecture available?
While playing with Azure Service Fabric actors, here is the weird thing I've recently found out: I can't change the default settings for partitioning. If I try to, say, set named partitioning or change the low/high key for UniformInt64, it gets overwritten each time I build my project in Visual Studio. There is no problem doing this for a stateful service; it only happens with actors. No errors, no records in the Event Log, no nothing... I've found just one single reference to the same problem on the Internet:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/4edbf0a3-307b-489f-b936-43af9a365a0a/applicationmanifestxml-overwritten-on-each-build?forum=AzureServiceFabric
But I haven't seen any explanation for it, neither on MSDN nor in the official documentation. Any ideas? Would it really be 'by design'?
P.S.
Executing just the PowerShell script to deploy the app does allow me to set the scheme the way I want. Still, it's frustrating not to be able to do this in VS. Probably there is a good reason for that... there should be, right? :)
Reliable Services can be created with different partition schemes and partition key ranges. The Actor Service uses the Int64 partitioning scheme with the full Int64 key range to map actors to partitions. Every ActorId is hashed to an Int64, which is why the actor service must use an Int64 partitioning scheme with the full Int64 key range. However, custom ID values can be used for an ActorId, including GUIDs, strings, and Int64s.
When using GUIDs and strings, the values are hashed to an Int64. However, when explicitly providing an Int64 to an ActorId, the Int64 will map directly to a partition without further hashing. This can be used to control which partition actors are placed in.
(Source)
This ActorId => PartitionKey translation strategy doesn't work if your partitions are named.