RabbitMQ Exclusive Queue Lost Messages

For a consumer, when declaring a Queue as 'exclusive', the Queue will be deleted when the consumer disconnects, per the documentation.
Assuming there are messages in the Queue awaiting processing and the consumer goes offline, all messages on this 'exclusive' Queue will be lost when the Queue is removed.
Are there any strategies or ways to keep a Queue 'exclusive' but preserve the messages in the Queue/Broker so nothing is lost?
Thanks in advance.

An exclusive queue is deleted when the connection that declared it closes.
What you probably want is an exclusive consumer, which can be achieved by setting the exclusive parameter to true when consuming from a queue. An exclusive consumer ensures that only one consumer can consume from the queue; all other consumers are excluded from the queue while that consumer is attached.
In summary, to make a queue exclusive to one consumer and persist the messages in this queue, you should (see the sketch after this list):
Declare the queue to be durable
When the producer publishes a message, set the message's delivery mode to persistent
Use an exclusive consumer
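
Here is a minimal sketch of those three points with the RabbitMQ Java client (com.rabbitmq.client); the broker host and the queue name task-queue are placeholders:

import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

public class ExclusiveConsumerExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker address

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Durable queue (survives broker restarts); NOT declared exclusive,
            // so it is not deleted when this connection closes.
            channel.queueDeclare("task-queue", true, false, false, null);

            // Delivery mode 2 (persistent) so the message is written to disk.
            channel.basicPublish("", "task-queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "hello".getBytes(StandardCharsets.UTF_8));

            // Exclusive consumer: only this consumer may consume from the queue;
            // other basicConsume attempts are rejected while it is attached.
            channel.basicConsume("task-queue", false, "", false, true, null,
                    (consumerTag, delivery) -> {
                        System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8));
                        channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                    },
                    consumerTag -> { /* consumer cancelled */ });

            Thread.sleep(5000); // keep the connection open briefly for the demo
        }
    }
}

If the consumer disconnects, the queue and its persistent messages remain on the broker, and the consumer can reattach exclusively later.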

Related

What does LockDuration mean in Azure Service Bus

I have a consumer that has been subscribed to a queue in Azure Service Bus. From the consumer end, I have set LockDuration to 2min.
So, when a message arrives in the queue and the consumer picks it up, does it mean:
The consumer will be locked for 2min?
The consumer will not pick any new messages before 2min? Even if the consumer was able to process the message within seconds?
The message will not be picked by any other consumers for at-least 2min?
The lock duration is the amount of time that a consumer has exclusive access to a specific message without explicitly asking for more time (renewing the lock) or indicating that they are done (settling the message). If the amount of time indicated by the lock duration passes and the consumer hasn't renewed or settled, the lock expires, and Service Bus makes the message available for another consumer to read.
More information is available in: Message transfers, locks, and settlements.
To address your bullet points:
The consumer will be locked for 2min?
The consumer is not locked, the message is. The consumer is free to perform other operations, including receiving messages in parallel.
The consumer will not pick any new messages before 2min? Even if the consumer was able to process the message within seconds?
The consumer can continue to ask for more messages in parallel. If the consumer processes a message in seconds, it should complete/abandon/dead-letter the message which indicates to Service Bus that the consumer is done with the message.
The message will not be picked by any other consumers for at-least 2min?
This is true for the case where the consumer that holds the lock does not renew it or settle a message. More information is available in the link above.
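
For illustration, a minimal sketch with the Azure Service Bus Java SDK (azure-messaging-servicebus); the connection string and queue name are placeholders:

import com.azure.messaging.servicebus.*;

public class LockDurationExample {
    public static void main(String[] args) {
        ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
                .connectionString("<connection-string>") // placeholder
                .receiver()
                .queueName("<queue-name>")               // placeholder
                .buildClient();

        // Each received message is locked for the queue's LockDuration (e.g. 2 min);
        // the receiver itself is not locked and can keep receiving in parallel.
        receiver.receiveMessages(10).forEach(message -> {
            try {
                System.out.println(message.getBody().toString()); // your processing logic
                // Settling releases the lock immediately; you do not wait out the
                // remaining lock duration if processing finished in seconds.
                receiver.complete(message);
                // For work that outlasts the lock, receiver.renewMessageLock(message)
                // asks for more time instead.
            } catch (Exception e) {
                // Make the message available to other consumers right away.
                receiver.abandon(message);
            }
        });

        receiver.close();
    }
}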

What will happen to a message that is in the message queue while Kafka instances get bounced?

Is this a possible case of data loss? If, due to an underlying hardware issue, Kafka's request queue is backed up, and at this time we shutdown/bounce that Kafka broker, what will happen to the followers?
What will happen to the messages in the queue?
kafka.network:type=RequestChannel,name=RequestQueueSize
Size of the request queue. A congested request queue will not be able to process incoming or outgoing requests
Based on what I have learned about Kafka, this should be in the network layer. Does that mean the messages in the queue will be dropped? Is this a case of data loss?
A message still present in the request queue has not yet been appended to the log nor replicated to the replicas.
Depending on your producer (mainly the acks attribute) and broker configuration (min.insync.replicas), you're risking data loss.
Set acks to a stricter value (acks=all) to ensure that your request has been fully processed and replicated before it is acknowledged.
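
As a sketch, the relevant producer settings with the standard Kafka Java client; the bootstrap server and topic name are placeholders:

import org.apache.kafka.clients.producer.*;
import java.util.Properties;

public class SafeProducerExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // acks=all: the leader acknowledges only after all in-sync replicas have the record.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Retry transient failures instead of silently dropping the record.
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocking on the future surfaces any broker-side error to the caller.
            producer.send(new ProducerRecord<>("my-topic", "key", "value")).get();
        }
    }
}

On the broker/topic side, min.insync.replicas (for example, 2) controls how many replicas must actually be in sync before an acks=all write succeeds.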

ActiveMQ Artemis JMS Shared Subscription

I have a single-node ActiveMQ Artemis instance with two competing consumers connected to a topic. The topic subscription is shared, as per the JMS 2.0 specification. A shared subscription does guarantee that only one of the subscribers (using the same subscription name) gets each message. But what I noticed is that it does not guarantee that the second message is delivered only after the first one is acknowledged. If the first consumer takes time to acknowledge its message, the second message is delivered to the free consumer even before the acknowledgement of the first one is sent to the broker. Is this standard behaviour? And is there a way to stop the broker from delivering the second message before the acknowledgement of the first one?
ActiveMQ Artemis supports exclusive queues. These are special queues which route all messages to only one consumer at a time.
Obviously, exclusive queues have a drawback: you cannot scale out the consumers to improve consumption, as only one consumer is ever active.
However, I would suggest taking a look at message grouping to scale out your solution. Message groups are useful when you want all messages with a certain value of a property to be processed serially by the same consumer, without stopping the delivery of messages with a different value of that property to other consumers.
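
As an illustration of message grouping, a minimal JMS 2.0 producer sketch that sets the standard JMSXGroupID property (shown with the javax.jms API and the Artemis JMS client; the broker URL, queue name, and group key are placeholders):

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class GroupedProducerExample {
    public static void main(String[] args) {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder

        try (JMSContext context = factory.createContext()) {
            Queue queue = context.createQueue("orders"); // placeholder destination

            // All messages carrying the same JMSXGroupID are delivered serially
            // to one consumer; other groups still flow to the other consumers.
            context.createProducer()
                   .setProperty("JMSXGroupID", "customer-42")
                   .send(queue, "order payload");
        }
    }
}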

Consumer timeout during rebalance

When a consumer drops from a group and a rebalance is triggered, I understand no messages are consumed -
But does an in-flight request for messages stay queued past the max wait time?
Or does Kafka send any payload back during the rebalance?
UPDATE
For clarification, I'm referring specifically to the consumer polling process.
From my understanding, when one of the consumers drop from the consumer group, a rebalance of the partitions to consumers is performed.
During the rebalance, will an error be sent back to the consumer if it has already polled and is waiting for the max wait time to pass?
Or does Kafka wait the max time and send an empty payload?
Or does Kafka queue the request past the max wait time until the rebalance is complete?
Bottom line - I'm trying to explain periodic timeouts from consumers.
This may be in the docs, but I'm not sure where to find it.
Kafka producers don't send messages directly to their consumers; rather, they send them to the brokers.
The in-flight requests correspond to the producer, not to the consumer.
Whether the consumer leaves a group and a rebalance is triggered or not is quite immaterial to the behaviour of the producer.
Producer messages are queued in the buffer, batched, optionally compressed and sent to the Kafka broker as per the configuration.
In-flight requests are the maximum number of unacknowledged requests
the client will send on a single connection before blocking.
Note that when we say ack, it is acknowledgement by the broker and not by the consumer.
Does Kafka send any payload back during the rebalance?
The Kafka broker doesn't notify its producers of a rebalance.
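
For reference, the in-flight limit quoted above is a producer-side setting; a minimal configuration sketch with the standard Kafka Java client (the values are illustrative):

import org.apache.kafka.clients.producer.ProducerConfig;
import java.util.Properties;

public class InFlightConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Up to 5 unacknowledged requests may be outstanding on a connection
        // before the producer blocks further sends.
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        // "Acknowledged" here means acknowledged by the broker, not by any consumer.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        System.out.println(props);
    }
}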

How does Kafka guarantee consumers don't read a single message twice?

How does Kafka guarantee consumers don't read a single message twice?
Or is the above scenario possible?
Could the same message be read twice by single or by multiple consumers?
There are several scenarios which can cause a consumer to consume duplicate messages:
The producer published the message successfully but did not receive the acknowledgement, which causes it to retry the same message
The producer publishes a batch of messages, but only part of the batch is published successfully. In that case, it will retry and resend the same batch again, which will cause duplicates
Consumers receive a batch of messages from Kafka and manually commit their offset (enable.auto.commit=false).
If consumers fail before committing to Kafka, next time they will consume the same records again, which produces duplicates on the consumer side.
To guarantee that no duplicate messages are consumed, the job's execution and the offset commit must be atomic, which gives exactly-once delivery semantics on the consumer side.
You can use the parameters below to achieve exactly-once semantics, but please understand that this comes with a performance trade-off.
Enable idempotence on the producer side, which guarantees that the same message is not published twice:
enable.idempotence=true
Define the transaction isolation level (isolation.level) as read_committed:
isolation.level=read_committed
In Kafka Streams, the above can be achieved by enabling the exactly-once processing guarantee (processing.guarantee=exactly_once), which turns each poll-process-produce cycle into a single transaction.
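
For the Kafka Streams case, a minimal configuration sketch (the application id and bootstrap server are placeholders; newer clients prefer the exactly_once_v2 value):

import org.apache.kafka.streams.StreamsConfig;
import java.util.Properties;

public class ExactlyOnceStreamsConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Makes every consume-transform-produce cycle a single Kafka transaction.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        System.out.println(props);
    }
}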
Idempotent
Idempotent delivery enables the producer to write a message to Kafka exactly once to a particular partition of a topic during the lifetime of a single producer, without data loss and without breaking ordering within a partition.
Transaction (isolation.level)
Transactions give us the ability to atomically update data in multiple topic partitions. All the records included in a transaction will be successfully saved, or none of them will be. It allows you to commit your consumer offsets in the same transaction along with the data you have processed, thereby allowing end-to-end exactly-once semantics.
With transactions, the producer uses beginTransaction, commitTransaction, and abortTransaction (in case of failure), while the consumer sets isolation.level to either read_committed or read_uncommitted:
read_committed: Consumers will always read committed data only.
read_uncommitted: Read all messages in offset order without waiting for transactions to be committed.
Please refer to the reference for more detail.
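
A condensed consume-transform-produce sketch of the transactional API described above, using a recent (2.5+) Kafka Java client; the topic names, group id, and transactional.id are placeholders:

import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;
import java.time.Duration;
import java.util.*;

public class TransactionalPipeline {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        cProps.put(ConsumerConfig.GROUP_ID_CONFIG, "pipeline-group");          // placeholder
        cProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        cProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");   // only read committed data
        cProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        cProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        Properties pProps = new Properties();
        pProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        pProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        pProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "pipeline-tx-1");   // placeholder
        pProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        pProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {

            consumer.subscribe(Collections.singletonList("input-topic")); // placeholder
            producer.initTransactions();

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) continue;

                producer.beginTransaction();
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> record : records) {
                        producer.send(new ProducerRecord<>("output-topic", record.key(), record.value()));
                        offsets.put(new TopicPartition(record.topic(), record.partition()),
                                new OffsetAndMetadata(record.offset() + 1));
                    }
                    // Commit the consumed offsets and the produced records atomically.
                    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                    producer.commitTransaction();
                } catch (Exception e) {
                    // None of this batch becomes visible to read_committed consumers.
                    producer.abortTransaction();
                }
            }
        }
    }
}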
It is absolutely possible if you don't make your consuming process idempotent.
For example, suppose you are implementing at-least-once delivery semantics: you first process messages and then commit offsets. It is possible that the offsets cannot be committed because of a server failure or a rebalance (maybe your consumer's partitions are revoked at that time), so on the next poll you will get the same messages again.
To be precise, this is what Kafka guarantees:
Kafka provides an ordering guarantee for messages within a partition
Produced messages are considered "committed" when they have been written to the partition on all of its in-sync replicas
Messages that are committed will not be lost as long as at least one replica remains alive
Consumers can only read messages that are committed
Regarding consuming messages, the consumers keep track of their progress in a partition by saving the last offset read in an internal compacted Kafka topic.
Kafka consumers can automatically commit the offset if enable.auto.commit is enabled. However, that will give "at most once" semantics. Hence, usually the flag is disabled and the developer commits the offset explicitly once the processing is complete.
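
As a sketch of that manual-commit pattern with the standard Kafka Java client (topic, group id, and bootstrap server are placeholders); note that a crash between processing and commitSync still leads to reprocessing, which is why the duplicate scenarios above apply:

import org.apache.kafka.clients.consumer.*;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // commit explicitly
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                // Commit only after the whole batch has been processed.
                consumer.commitSync();
            }
        }
    }
}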