I have a requirement to check the status of the queue manager in DataPower before sending any request to the queue; if it is down, the request should be routed to another queue.
Is there any way to achieve this?
I have an [Azure Storage] queue where I put e-mail messages to be sent. A separate service then monitors that queue and sends the e-mails through some provider; in this particular case I'm using SendGrid.
So, theoretically, if the sender crashes right after a successful call to the SendGrid Mail Send API (https://sendgrid.com/docs/API_Reference/Web_API_v3/Mail/index.html), the message will be returned to the queue and retried later. This may result in the same e-mail being delivered more than once, which could be really annoying for some types of e-mail.
The normal way to avoid this situation would be to provide some sort of idempotency key to the Send API. The called side can then make sure the operation is performed at most once.
After careful reading of the SendGrid documentation and some googling, I could not find any way to achieve what I'm looking for here (at-most-once semantics). Any ideas?
Without support for an idempotency key in the API itself, your options are limited, I think.
You could modify your email sending service to dequeue and commit before calling the Send API. That way, if the service crashes after the call, the message will not be retried because it has already been removed from the queue; it will be sent at most once.
Additionally, you could implement limited retries on particular HTTP responses from SendGrid (e.g. 429 and 5xx) where you are certain the message was not sent and retrying might be useful. This maintains "at most once" while lowering the failure rate, and should probably include some backoff time between attempts.
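A minimal sketch of that ordering, using the Azure Storage Queue Java SDK; the queue name, connection-string variable, and the sendWithSendGrid helper are placeholders for your own setup, not part of any real API:

```java
import com.azure.storage.queue.QueueClient;
import com.azure.storage.queue.QueueClientBuilder;
import com.azure.storage.queue.models.QueueMessageItem;

public class AtMostOnceSender {
    public static void main(String[] args) {
        QueueClient queue = new QueueClientBuilder()
                .connectionString(System.getenv("STORAGE_CONNECTION_STRING"))
                .queueName("emails") // assumed queue name
                .buildClient();

        QueueMessageItem message = queue.receiveMessage();
        if (message == null) {
            return; // queue is empty, nothing to do
        }

        // Delete (commit) BEFORE calling SendGrid: if the process crashes
        // afterwards, the message cannot be redelivered, so the e-mail goes
        // out at most once.
        queue.deleteMessage(message.getMessageId(), message.getPopReceipt());

        sendWithSendGrid(message);
    }

    // Hypothetical helper: call the SendGrid v3 Mail Send API here, retrying
    // only on responses where the mail was definitely not sent (429/5xx),
    // with some backoff between attempts.
    static void sendWithSendGrid(QueueMessageItem message) {
        // SendGrid call omitted in this sketch.
    }
}
```

The trade-off is the mirror image of the original problem: a crash between the delete and the SendGrid call means the e-mail is lost rather than duplicated.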
I'm trying very hard to understand the flow of a web request to a server which has a queue or message broker in the middle, but I can't find information about when and where the reply is given.
Imagine this use case:
Client A:
sends an invoice order request
the invoice is enqueued
the request is processed and dequeued.
At what point will the client receive a response?
Right after the message is received by the queue?
Right after the message is processed and dequeued? Something else?
I'm asking because if the reply only comes after the message has been processed, the client might wait a long time. Imagine the message takes 3 minutes to process: would the client need to keep asking the server whether it has been processed, or is a connection maintained using something like long polling?
I'm interested in scenarios using RabbitMQ and Kafka.
The advantage of having a messaging system is that the frontend web server and the backend processing are decoupled. Best practice is for the web server to publish the message and wait only for the messaging system to acknowledge that it has received it.
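For example, with RabbitMQ the request handler can publish, wait only for the broker's publisher confirm, and then answer the HTTP client immediately (e.g. with an order id it can use to check progress later), rather than waiting for the 3-minute processing. A minimal sketch with the RabbitMQ Java client; the queue name and payload are assumptions:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class EnqueueAndAck {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.queueDeclare("invoice-orders", true, false, false, null);
            channel.confirmSelect(); // enable publisher confirms

            channel.basicPublish("", "invoice-orders", null,
                    "{\"orderId\":\"1234\"}".getBytes(StandardCharsets.UTF_8));

            // Block only until the broker confirms it has the message,
            // not until the backend consumer has processed it.
            channel.waitForConfirmsOrDie(5_000);

            // At this point the web tier can reply to the HTTP client and let
            // the consumer do the long-running processing asynchronously.
        }
    }
}
```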
I have a message I want to dequeue, and right after it's dequeued I want to queue another message to a different queue. I want to do all of this in the same transaction. Is this possible with RabbitMQ or any other queueing service?
The closest you can get to what you want with RabbitMQ is:
Use acks and publisher confirms
You receive a message and do not ack it.
Send your reply message.
Wait for the confirm from the broker.
Once the confirm has arrived, ack the initial message.
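A minimal sketch of that sequence with the RabbitMQ Java client; the queue names are placeholders:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class DequeueThenEnqueue {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.queueDeclare("incoming-queue", true, false, false, null);
            channel.queueDeclare("outgoing-queue", true, false, false, null);
            channel.confirmSelect(); // publisher confirms for the outgoing message

            // 1. Receive a message but do NOT ack it yet.
            GetResponse incoming = channel.basicGet("incoming-queue", false);
            if (incoming == null) {
                return; // queue was empty
            }

            // 2. Publish the follow-up message to the other queue.
            channel.basicPublish("", "outgoing-queue", null, incoming.getBody());

            // 3. Wait for the broker to confirm it has the outgoing message.
            channel.waitForConfirmsOrDie(5_000);

            // 4. Only now ack the original message.
            channel.basicAck(incoming.getEnvelope().getDeliveryTag(), false);
        }
    }
}
```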
But then, consider this failure situation:
Initial message received
Reply message sent
Your service fails before ACKing the initial message
When your service comes back, it will receive the initial message again
So you will need to use some deduplication mechanism etc.
My system has one server and many clients of two types. The first type of client sends events to the server. The second type receives notifications from the server about these events. I'm currently evaluating RabbitMQ and NServiceBus to build a message queue with the following requirements:
The first type of client should have an incoming queue for events (physically running on it) to prevent data loss on server disconnection.
The server should have an outgoing queue for notifications (physically running on it) to prevent data loss on disconnection of the second type of client.
[Client Type 1 + queue] -> [Server + queue] -> [Client Type 2]
Can this be achieved with one of the specified components (or both of them)? If yes, how?
I am not very familiar with Rabbit, so I'll answer the question with NServiceBus (NSB) in mind.
My system has one server and many clients of two types
OK, first thing, NSB does not have equivalent concepts of client and server. In NSB, all participating applications are called endpoints or services. Some endpoints are publishers, some are subscribers, some are senders, some are receivers. Some are any combination of the above.
The first type of client sends events to the server.
By convention, there are two types of message in NSB, commands and events. Commands are sent, events are published. So in this scenario, the type 1 clients would send commands to the server. In this scenario, a type 1 client would be a sender endpoint. The server is therefore a receiver endpoint.
The second type receives notifications from the server about these events
So in this scenario, the server is a publisher endpoint, and a type 2 client is a subscriber endpoint. The server will publish an event which all subscribers would receive.
The first type of client should have an incoming queue for events (physically running on it) to prevent data loss on server disconnection
I am assuming that you mean the type 1 client needs to receive, from somewhere, the data which it then sends to the server.
Well, in NSB, every endpoint has a queue, called the input queue. This is the means by which the endpoint receives messages.
In NSB the queue transport is abstracted, but out of the box the default is MSMQ (NSB also has support for Rabbit as the queue transport).
This provides a store-and-forward messaging pattern, which guarantees reliability. What this means is that if the server is unavailable, the queuing transport would wait until it was available again before transmitting the message.
So you could send a message onto the type 1 client input queue, which would then be converted into a command and sent to the server.
The server should have an outgoing queue for notifications (physically running on it) to prevent data loss on disconnection of the second type of client.
Similarly, when the server publishes the event (on receipt of the command from the type 1 client), the queuing transport would guarantee delivery of the event to all the subscribing type 2 clients.
A point of note: this would not be based on the server having an "outgoing queue", but rather the queuing transport would deliver the messages to the input queues of all subscribing endpoints.
So all your scenarios would be satisfied by using NServiceBus as part of your approach.
I am using org.hornetq.api.core.client.
How can I guarantee that the message I am sending actually reached the queue (not the client, just the queue)?
producer.send("validQueue",clientMessage)
Please note that the queue is a valid queue.
This similar question refers to an invalid queue; other ones, such as this one, are about delivery to the client.
It really depends on how you are sending.
Regarding your first point, about the queue being valid:
First of all, on JMS you have no way to send to an invalid queue, since the producer will validate the queue's existence. With the HornetQ core API you send to an address (not to a queue), and that address may have no queue bound to it, so you have to query whether the address has queues or not.
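A rough sketch of that check with the core API's binding query; the address name is a placeholder and the method names are from memory, so verify them against your HornetQ version:

```java
import org.hornetq.api.core.SimpleString;
import org.hornetq.api.core.client.ClientSession;

// Returns true if at least one queue is bound to the address, i.e. a send
// to it will actually land somewhere. (bindingQuery / getQueueNames are
// quoted from memory; check them against your HornetQ version.)
static boolean addressHasQueues(ClientSession session, String address) throws Exception {
    ClientSession.BindingQuery query =
            session.bindingQuery(SimpleString.toSimpleString(address));
    return query.isExists() && !query.getQueueNames().isEmpty();
}
```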
Now, for confirmation that the message was received:
Scenario I: persistent messages, non-transactional
Every send is blocking. The client will unblock as soon as the server has acknowledged receiving the message. This is done automatically; you don't have to do anything.
Scenario II: non-persistent messages, non-transactional
There are no confirmations by default. The message is sent asynchronously. We assume the message is transient and it's not a big deal if you lose it. You can change that by setting block-on-non-persistent-send on the ServerLocator.
Scenario III: transactional (either persistent or not)
As soon as you call commit, the message is on the queues.
Scenario IV: confirmation send
You set a callback, and you get a method call as soon as the server has acked the message on the queues. Look in the manual for the confirmation callback. The same feature also exists in JMS 2.
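A rough sketch of scenarios I/II and IV with the core API; the transport, address name, and message body are placeholders, and the exact setter names (e.g. setBlockOnNonDurableSend vs. block-on-non-persistent-send) vary slightly between HornetQ versions:

```java
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class ConfirmedSend {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));

        // Scenarios I/II: block the send until the server has the message.
        locator.setBlockOnDurableSend(true);     // persistent messages
        locator.setBlockOnNonDurableSend(true);  // non-persistent messages

        // Scenario IV: asynchronous confirmations require a confirmation window.
        locator.setConfirmationWindowSize(10 * 1024);

        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();

        // Called once the server has accepted the message onto its queues.
        session.setSendAcknowledgementHandler(
                message -> System.out.println("acked: " + message.getMessageID()));

        ClientProducer producer = session.createProducer("validQueue");
        ClientMessage clientMessage = session.createMessage(true); // durable
        clientMessage.getBodyBuffer().writeString("hello");
        producer.send(clientMessage);

        session.close();
        factory.close();
        locator.close();
    }
}
```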