I have a JMS topic on the embedded Artemis broker in WildFly 14. Now I want to define max-delivery-attempts and redelivery-delay, but I don't want to do it per topic; I want to do it per client, where a client is one or more message-driven beans (MDBs) in an EAR-packaged application on the same server instance.
I am aware of the possibility to define an address-setting in standalone.xml. But that only applies to one or more topics, depending on wildcards.
Is there any way to define "max-delivery-attempts" and "redelivery-delay" per MDB listening to my topic?
The client implementation underlying the MDB doesn't support its own max-delivery-attempts or redelivery-delay logic. That functionality is implemented on the broker. Putting that functionality in the client wouldn't make a lot of sense, as the broker supports a number of different standard protocols (e.g. AMQP, STOMP, MQTT) with implementations across many different languages and platforms. Putting the redelivery configuration on the broker is the only way to get consistent behavior across all those different clients.
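For reference, the broker-side configuration is an address-setting in standalone.xml; here is a sketch (the address match jms.topic.myTopic and the values are placeholders, and the exact address prefix depends on the WildFly/Artemis version). Note that it still applies per address, not per MDB:

<!-- Sketch: goes inside the messaging-activemq subsystem's <server> element.
     The name attribute is a placeholder match; wildcards like # and * work too. -->
<address-setting name="jms.topic.myTopic"
                 max-delivery-attempts="5"
                 redelivery-delay="5000"/>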
I am trying to figure out how to use ActiveMQ Artemis to achieve the following topology. I need several producers writing to queues hosted on two standalone Artemis brokers. At the moment, every producer creates two connection factories, which handle the connections to the two brokers and create the corresponding queues.
@Bean
public ConnectionFactory jmsConnectionFactoryBroker1() {
    return new ActiveMQConnectionFactory(brokerUrl_1, username, password);
}

@Bean
public ConnectionFactory jmsConnectionFactoryBroker2() {
    return new ActiveMQConnectionFactory(brokerUrl_2, username, password);
}
My main issue is that I need to know which queue is assigned to which broker, and at the same time I need to know that if one broker is down for some reason, I can re-create that queue on the other broker on the fly and avoid losing any further messages. So my approach was to set up the broker URLs as below:
artemis.brokerUrl_1=(tcp://myhost1:61616,tcp://myhost2:61616)?randomize=false
artemis.brokerUrl_2=(tcp://myhost2:61616,tcp://myhost1:61616)?randomize=false
So, using a different JmsTemplate for each broker URL, my intention was that the JmsTemplate using brokerUrl_1 would create its queues on myhost1, and correspondingly the JmsTemplate using brokerUrl_2 would create its queues on myhost2.
I would have expected (due to the randomize parameter) that each queue would have some kind of static membership to one broker, and that in the case of a broker failure there would be some kind of migration by re-creating the queue from scratch on the other broker.
Instead, what I notice is that the queues are almost never distributed as intended but rather randomly: the same queue can appear on either broker, which is not desirable for my use-case.
How can I approach this case so that I can create my queues on a predefined broker and have the fail-safe that, if one broker is down, the producer will create the same queue on the other broker and continue?
Note that having shared state between the brokers is not an option.
The randomize=false parameter doesn't apply to the Artemis core JMS client; it only applies to the OpenWire JMS client distributed with ActiveMQ 5.x. Which connector is selected from the URL is determined by the connection load-balancing policy, as discussed in the documentation. The default policy is org.apache.activemq.artemis.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy, which selects a random connector from the URL list and then round-robins connections after that. Other policies are available, and if none of them gives you the behavior you want, you can implement your own.
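For example, here is a sketch of pinning a factory to the first connector in its list with the built-in first-element policy (reusing the fields from the question; whether the client then tries the second connector depends on your reconnect settings):

@Bean
public ConnectionFactory jmsConnectionFactoryBroker1() {
    // Sketch: override the default round-robin policy so this factory always
    // starts from the first connector in the list (myhost1).
    return new ActiveMQConnectionFactory(
            "(tcp://myhost1:61616,tcp://myhost2:61616)"
                    + "?connectionLoadBalancingPolicyClassName="
                    + "org.apache.activemq.artemis.api.core.client.loadbalance.FirstElementConnectionLoadBalancingPolicy",
            username, password);
}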
That said, it sounds like what you really want/need is two pairs of brokers where each pair consists of a live and a backup. That way, if a live broker fails, its clients can fail over to the backup, and you won't have to deal with the complexity of the "fake" fail-over functionality you're trying to implement.
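The client side of that topology might look like the following sketch (host names are placeholders; ha and reconnectAttempts are standard Artemis URI parameters):

@Bean
public ConnectionFactory haConnectionFactory() {
    // Sketch: with ha=true the client receives topology updates from the live
    // broker and fails over to its backup automatically; reconnectAttempts=-1
    // means "retry forever". Hosts and credentials are placeholders.
    return new ActiveMQConnectionFactory(
            "(tcp://live1:61616,tcp://backup1:61616)?ha=true&reconnectAttempts=-1",
            username, password);
}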
Also, since you're using Spring's JmsTemplate, you should be aware of some well-known anti-patterns it uses which can significantly hurt performance.
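The usual mitigation for the best-known of those anti-patterns (a new connection and session per operation) is to wrap the factory in Spring's CachingConnectionFactory; a sketch reusing the fields from the question:

@Bean
public ConnectionFactory cachingConnectionFactoryBroker1() {
    // Sketch: cache sessions and producers so JmsTemplate does not open a new
    // connection/session for every send. The cache size is illustrative.
    CachingConnectionFactory caching = new CachingConnectionFactory(
            new ActiveMQConnectionFactory(brokerUrl_1, username, password));
    caching.setSessionCacheSize(10);
    return caching;
}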
I am trying to use Kafka as a request-response system between two clients much like RabbitMQ and I was wondering if it is possible to set the expiration of a message so that after it is posted it will automatically get deleted from the Kafka servers.
I'm trying to do it at the per-message level as well (even per-topic would be okay, but I'd like to use the same template if possible).
I was checking ProducerRecord, but all it had was a timestamp. I also don't see any mention of it in KafkaHeaders.
Kafka records are deleted in segments (groups of messages) based on the overall topic retention. Spring is just a client; it doesn't control the server-side logic of the log cleaner.
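If per-topic retention is enough for your case, it can be set with the Kafka AdminClient; a sketch (the topic name, bootstrap address, and the 60-second value are illustrative, and deletion still happens segment by segment, not per message):

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.config.TopicConfig;

public class TopicRetentionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // retention.ms applies to the whole topic; expired records are only
            // removed once their segment becomes eligible for cleanup.
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry(TopicConfig.RETENTION_MS_CONFIG, "60000"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(op))).all().get();
        }
    }
}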
I'm designing a system where I have to integrate with multiple Message Queue (MQ) based backends. I have one microservice per backend for processing MQ payloads. I have chosen Kafka as the messaging medium and am considering Kafka-MQ-Connect for the MQ integration.
I can think of two approaches to integration.
1. A Kafka-MQ-Connect (Source/Sink) connector per backend + a Kafka topic (to/from) per backend.
Pros.
- Can extend to new backends without touching the existing connectors.
Cons.
- Too many connectors and topics to maintain.
2. A single Kafka-MQ-Connect (Source/Sink) connector + a single Kafka topic (to/from) for all the backends. Additionally, the Sink connectors do dynamic routing to the MQs, and the microservices have built-in message filters so they only pick up relevant messages.
Pros.
- Few topics and connectors to maintain.
Cons.
- Addition of new MQ backends would require connector changes.
What would be the better approach? Are there any other integration alternatives apart from the above?
Although you haven't provided any further requirements (for example, how frequently you plan to add new data sources and what traffic you have), I would pick the first approach. It will be much easier to add/remove data sources in the future.
And I wouldn't say that it is hard to maintain multiple sink/source connectors and topics. From my experience, it is harder to maintain connectors that replicate data from multiple topics/sources. For example, if you want to apply an SMT (Single Message Transform) to a particular topic, you won't be able to do so without isolated connectors, since SMTs are applied at the connector level. Furthermore, if you configure a single connector for all of your sources and it fails at some point, all of your target systems will experience downtime.
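To make the SMT point concrete, here is a sketch of one isolated source connector configuration; the names, the connector class, and the exact topic property are placeholders, while InsertField is a standard Kafka Connect SMT:

name=mq-source-backend-a
connector.class=<your MQ vendor's source connector class>
tasks.max=1
# The property naming the target Kafka topic varies by connector:
topic=backend-a-events
# SMTs are configured per connector, so an isolated connector can transform
# just this backend's records:
transforms=tagSource
transforms.tagSource.type=org.apache.kafka.connect.transforms.InsertField$Value
transforms.tagSource.static.field=source
transforms.tagSource.static.value=backend-a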
Kafka does not have any concept of broker-side filters that ensure the messages picked up by a consumer match some criteria; the filtering has to happen at the consumers (or applications). So in this case the processing time increases at the client/consumer.
In the case of JMS, if your messaging application needs to filter the messages it receives, you can use a JMS API message selector, which allows a message consumer to specify the messages it is interested in. Message selectors assign the work of filtering messages to the JMS provider rather than to the application. So in this case the processing time increases at the JMS provider.
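A minimal sketch of the JMS side (the queue name and the eventType property are illustrative):

import javax.jms.*;

public class SelectorExample {
    // Sketch: the selector runs at the JMS provider, so this consumer only
    // ever receives messages whose producer set eventType='ORDER'.
    static Message receiveOrder(ConnectionFactory factory) throws JMSException {
        try (Connection connection = factory.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(
                    session.createQueue("orders"), "eventType = 'ORDER'");
            connection.start();
            return consumer.receive(5000); // null if nothing matched within 5s
        }
    }
}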
Which of the two is better in terms of keeping the code clean and also in terms of performance?
I don't think your question can be answered conclusively as the performance will ultimately depend on the actual implementation (i.e. which specific JMS provider is used) as well as the particular use-case (e.g. details like the number of clients, the message volume, how many messages match the filters, network speed, etc.).
I was reading about Apache Kafka and came across its concept of consumer groups. What I don't understand is the use case: two different consumers from different groups may read the same published message. Why would one want to process the same message with two different consumers? Can someone give a practical use case?
Say you want to write the data to MySQL and to Elasticsearch, and you also have an application that reads events and flags some as "errors".
Each of these use-cases is a separate application that wants to see all the events, so each will be a separate consumer group and each will see all messages.
This is actually the most typical scenario in Kafka: an application produces a message, and you have two different systems creating two different views on that data (e.g. indexing it in Elasticsearch and caching it in Redis).
Before Kafka, it was common to have your app dual-write its data into both systems, with all the consistency problems dual writes carry.
With Kafka, you can spin off as many consumer systems as you like in the form of consumer groups, and you also get parallelism and fault tolerance by having multiple partitions and multiple consumer instances within each group.
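A sketch of the two-views scenario with plain Kafka consumers (the group names, topic, and bootstrap address are illustrative). Because the two consumers use different group.ids, each receives every record; consumers sharing a group.id would instead split the partitions between them:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerGroupsExample {
    static KafkaConsumer<String, String> consumerFor(String groupId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of("events"));
        return consumer;
    }

    public static void main(String[] args) {
        try (KafkaConsumer<String, String> esIndexer = consumerFor("es-indexer");
             KafkaConsumer<String, String> redisCache = consumerFor("redis-cache")) {
            // Each group tracks its own offsets, so both loops see all records.
            for (ConsumerRecord<String, String> r : esIndexer.poll(Duration.ofSeconds(1))) {
                // index into Elasticsearch
            }
            for (ConsumerRecord<String, String> r : redisCache.poll(Duration.ofSeconds(1))) {
                // cache in Redis
            }
        }
    }
}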