GCP Pub/Sub: nack a message with a reason in the dead letter topic - publish-subscribe

Is it possible to nack a message with a reason, so that when the message goes to the dead letter topic the consumer there can see why the message failed processing?
For example, if I want to process a payment message, I could nack it for different reasons: no funds left, amount too high, etc.
That way the consumer of the dead letter topic knows why a specific message failed to process.
thanks!
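
For reference, here is a minimal sketch of what the nack path looks like with the Java Pub/Sub client (the project, subscription, and processPayment names are placeholders): AckReplyConsumer.nack() takes no arguments, so a reason cannot be attached at that call by itself.

    import com.google.cloud.pubsub.v1.AckReplyConsumer;
    import com.google.cloud.pubsub.v1.MessageReceiver;
    import com.google.cloud.pubsub.v1.Subscriber;
    import com.google.pubsub.v1.ProjectSubscriptionName;
    import com.google.pubsub.v1.PubsubMessage;

    public class PaymentSubscriber {
        public static void main(String[] args) {
            // "my-project" and "payments-sub" are placeholder names.
            String subscription = ProjectSubscriptionName.format("my-project", "payments-sub");

            MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
                try {
                    processPayment(message); // hypothetical business logic
                    consumer.ack();
                } catch (Exception e) {
                    // nack() takes no arguments, so the failure reason cannot be
                    // attached here; it only requests redelivery / dead-lettering.
                    consumer.nack();
                }
            };

            Subscriber subscriber = Subscriber.newBuilder(subscription, receiver).build();
            subscriber.startAsync().awaitRunning();
        }

        private static void processPayment(PubsubMessage message) {
            // placeholder for the actual payment handling
        }
    }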

Related

ActiveMQ Artemis JMS Shared Subscription

I have a single node ActiveMQ instance with two competing consumers connected to a topic. The topic subscription is shared as per JMS 2.0 specification. Shared subscription does guarantee that only either of the subscribers (using same subscription name) gets the message. But what I noticed is that it does not guarantee that the second message is delivered only if the first one is acknowledged. In case if the first consumer takes time to acknowledge the message, the second message is delivered to the free consumer even before the acknowledgement of the first one is sent by the consumer to the broker. Is this a standard behaviour? And is there a way to stop the broker from delivering the second message before the acknowledgement of the first one?
ActiveMQ Artemis supports exclusive queues. These are special queues which route all messages to only one consumer at a time.
Obviously exclusive queues have a drawback: you cannot scale out the consumers to improve consumption, as only one consumer would technically be active.
However, I would suggest taking a look at message grouping to scale out your solution. Message groups are useful when you want all messages with a certain value of a property to be processed serially by the same consumer, without stopping the delivery of messages with a different value of that property to other consumers.
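
For illustration, a minimal sketch of the message-grouping idea, assuming a plain JMS 2.0 client, the Artemis JMS connection factory, and a placeholder queue named "orders": every message that carries the same JMSXGroupID is delivered serially to one consumer, while other groups remain free to go to other consumers.

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.JMSException;
    import javax.jms.JMSProducer;
    import javax.jms.Queue;
    import javax.jms.TextMessage;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class GroupedProducer {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            try (JMSContext context = cf.createContext()) {
                Queue queue = context.createQueue("orders"); // placeholder queue name
                JMSProducer producer = context.createProducer();

                TextMessage message = context.createTextMessage("order #42");
                // All messages carrying the same JMSXGroupID are delivered serially
                // to one consumer; messages with other group IDs can still go to
                // other consumers.
                message.setStringProperty("JMSXGroupID", "customer-123");
                producer.send(queue, message);
            }
        }
    }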

max-delivery-attempts does not work for un-acknowledged messages

I noted strange behavior in Artemis. I'm not sure if this is a bug or if I don't understand something.
I use the Artemis Core API. I set autoCommitAcks to false. I noted that if a message is received in a MessageHandler but the message is not acknowledged and the session is rolled back, then Artemis does not consider this message undelivered; Artemis considers this message as never sent to the consumer at all. The max-delivery-attempts parameter does not work in this case. The message is redelivered an infinite number of times. The method org.apache.activemq.artemis.api.core.client.ClientMessage#getDeliveryCount returns 1 each time. The message shows false in the Redelivered column in the web console. If the message is acknowledged before the session rollback, then max-delivery-attempts works properly.
What exactly is the purpose of message acknowledgement? Does acknowledging mean only that the message was received, or that it was received and processed successfully? Maybe I can use acknowledgement in both ways and it only depends on my requirements?
By message acknowledge I mean calling org.apache.activemq.artemis.api.core.client.ClientMessage#acknowledge method.
The behavior you're seeing is expected.
Core clients actually consume messages from a local buffer which is filled with messages from the broker asynchronously. The amount of message data in this local buffer is controlled by the consumerWindowSize set on the client's URL. The broker may dispatch many thousands of messages to various clients that sit in these local buffers and are never actually seen in any capacity by the consumers. These messages are considered to be in delivery and are not available to other clients, but they are not considered to be delivered. Only when a message is acknowledged is it considered to be delivered to a client.
If the client is auto-committing acknowledgements then acknowledging a message will quickly remove it from its respective queue. Once the message is removed from the queue it can no longer be redelivered because it doesn't exist anymore on the broker. In short, you can't get configurable redelivery semantics if you auto-commit acknowledgements.
However, if the client is not auto-committing acknowledgements and the consumer closes (for any reason) without committing the acknowledgements or calls rollback() on its ClientSession then the acknowledged messages will be redelivered according to the configured redelivery semantics (including max-delivery-attempts).
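
To make that concrete, a minimal sketch of the non-auto-committed flow with the Core API (the broker URL, queue name, and process() method are placeholders): the acknowledgement only becomes final on commit(), and acknowledging before rollback() is what lets the delivery count, and therefore max-delivery-attempts, take effect.

    import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
    import org.apache.activemq.artemis.api.core.client.ClientConsumer;
    import org.apache.activemq.artemis.api.core.client.ClientMessage;
    import org.apache.activemq.artemis.api.core.client.ClientSession;
    import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
    import org.apache.activemq.artemis.api.core.client.ServerLocator;

    public class CoreConsumerExample {
        public static void main(String[] args) throws Exception {
            ServerLocator locator = ActiveMQClient.createServerLocator("tcp://localhost:61616");
            ClientSessionFactory factory = locator.createSessionFactory();
            // createSession(autoCommitSends, autoCommitAcks): acks only take effect on commit()
            ClientSession session = factory.createSession(true, false);
            ClientConsumer consumer = session.createConsumer("payments"); // placeholder queue
            session.start();

            ClientMessage message = consumer.receive(5000);
            if (message != null) {
                try {
                    process(message);      // hypothetical business logic
                    message.acknowledge(); // mark as delivered...
                    session.commit();      // ...and make the acknowledgement final
                } catch (Exception e) {
                    message.acknowledge(); // ack before rollback so the delivery is counted
                    session.rollback();    // triggers redelivery, honoring max-delivery-attempts
                }
            }
            session.close();
            locator.close();
        }

        private static void process(ClientMessage message) {
            // placeholder for the actual processing
        }
    }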

How to use exponential backoff with manual acknowledgment of Kafka messages

I have a Spring Boot project that receives messages on a Kafka topic. The application itself is fairly simple:
Receive a message
Perform a REST request to another service.
Acknowledge the message upon success.
If, for whatever reason, the REST request fails, an indefinite exponential backoff is implemented to keep retrying the REST request (messages have an expiration time which, when exceeded, causes them to be discarded so they don't retry forever).
Scenario:
A message is received that fails the REST request and is in the retry loop (Offset 1).
Another message is received that succeeds (offset 2). Because it succeeds, the second message is acknowledged, which moves the committed partition offset to 2.
If the application crashes or is brought down in this scenario, then when it comes back up, the message at offset 1 is lost.
What can I do to ensure the acknowledgments are sequential? Replaying messages is not a concern because there is deduplication logic implemented backed by a database that tracks message state.
an indefinite exponential backoff is implemented to keep trying the REST request
Another message is received that succeeds (Offset 2).
That won't happen if you implement the indefinite retry via an ExponentialBackOff configured in the listener container's SeekToCurrentErrorHandler.
You will always get offset 1 re-delivered until successful; you will not see offset 2 before that.
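
For context, a minimal sketch of that configuration, assuming Spring for Apache Kafka 2.x where SeekToCurrentErrorHandler is the container error handler, wired with an ExponentialBackOff that has no maximum elapsed time and is therefore effectively indefinite:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
    import org.springframework.util.backoff.ExponentialBackOff;

    @Configuration
    public class KafkaRetryConfig {

        @Bean
        public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
                ConsumerFactory<String, String> consumerFactory) {

            ConcurrentKafkaListenerContainerFactory<String, String> factory =
                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory);

            // Start at 1s and double on each failure; with no maximum elapsed time
            // the retries are effectively indefinite, and the container keeps
            // re-seeking the failed offset, so the next offset is never delivered
            // (or committed) ahead of it.
            ExponentialBackOff backOff = new ExponentialBackOff(1000L, 2.0);
            backOff.setMaxInterval(60_000L);
            factory.setErrorHandler(new SeekToCurrentErrorHandler(backOff));

            return factory;
        }
    }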

When message processing fails, can the consumer put the message back on the same topic?

Suppose one of my programs is consuming messages from a Kafka topic. During processing of a message, the consumer accesses some DB. The DB access fails for some reason, but we don't want to abandon the message; we need to park it for later processing. In JMS, when message processing fails, the application container puts the message back on the queue, so it is not lost. In Kafka, once a message is received, the offset increases and the next message comes. How do I handle this?
There are two approaches to achieve this.
Set the Kafka acknowledge mode to manual and, in case of an error, terminate the consumer thread without committing the offset (if group management is enabled, a new consumer will be added after rebalancing is triggered and will poll the same batch).
The second approach is simpler: just have one error topic and publish messages to it in case of any error, so you can consume them later or keep track of them (see the sketch below).
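
A minimal sketch of the second approach with the plain Kafka clients (broker address, topic names, and the process() method are placeholders): offsets are committed manually only after each record has either been processed or parked on the error topic.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ErrorTopicConsumer {
        public static void main(String[] args) {
            Properties consumerProps = new Properties();
            consumerProps.put("bootstrap.servers", "localhost:9092");
            consumerProps.put("group.id", "payments");
            consumerProps.put("enable.auto.commit", "false"); // commit offsets manually
            consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            Properties producerProps = new Properties();
            producerProps.put("bootstrap.servers", "localhost:9092");
            producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
                 KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
                consumer.subscribe(List.of("payments")); // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        try {
                            process(record); // hypothetical DB work
                        } catch (Exception e) {
                            // Park the failed message on an error topic instead of losing it.
                            producer.send(new ProducerRecord<>("payments-errors", record.key(), record.value()));
                        }
                    }
                    consumer.commitSync(); // offsets advance only after the batch is handled
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) {
            // placeholder for the actual processing
        }
    }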

How to drop queue messages when there is no consumer? (ActiveMQ)

Using ActiveMQ:
Scenario:
The server will send many messages to the client through a queue.
However, I need to drop the messages in the queue if there is no consumer (client).
Thanks in advance!
You can use non-persistent messaging, and the messages are dropped if there are no active consumers.
Another alternative could be to use message expiry, so the messages expire after X period if they are not consumed from the queue.
Set a JMSExpiration on each message for some duration (30 seconds? 5 minutes?), and then any message that's not consumed after that amount of time (whether because there's no consumer or because the consumer's running behind) will be sent to the DLQ. Or if you don't want it in the DLQ, then configure the dead letter strategy to set processExpired=false or use a Discarding DLQ Plugin, both documented at http://activemq.apache.org/message-redelivery-and-dlq-handling.html.
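
For illustration, a minimal JMS sketch against classic ActiveMQ (broker URL and queue name are placeholders) combining both suggestions: non-persistent delivery plus a time-to-live, so messages that nobody consumes are dropped or expire instead of accumulating.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ExpiringSender {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = cf.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("updates"); // placeholder queue name
                MessageProducer producer = session.createProducer(queue);

                // Non-persistent messages are not written to disk and can be dropped;
                // the time-to-live sets JMSExpiration, so anything not consumed within
                // 30 seconds expires instead of sitting in the queue.
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
                producer.setTimeToLive(30_000L);

                producer.send(session.createTextMessage("status update"));
            } finally {
                connection.close();
            }
        }
    }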