Process messages from Azure in LIFO - REST

I am using the Azure REST API to read messages from an Azure Queue using Peek-Lock Message. Is there any way I can read the last message that was posted in the queue rather than reading from a queue-based mechanism (FIFO)?
Also, is there a faster way to process messages from Azure other than using the Peek-Lock Message REST API?
Thanks!

Is there any way I can read the last message that was posted in the
queue rather than reading from a queue based mechanism (FIFO)?
Using the REST API, unfortunately there's no way to process the last message first; you would have to implement something on your own. If you know that your queue can't have more than 32 messages at a time, you could fetch all 32 messages in one call and sort them on the client side by their insertion time. Yet another (crazy) idea would be to create a new queue for each message and name the queue using the pattern "q" + (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks). Then list the queues and read from the first one: because the inverted tick count makes newer queues sort first, it will hold the message you last inserted.
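A minimal Python sketch of that first (client-side sorting) idea, assuming the XML shape returned by the Get Messages REST call (a QueueMessagesList of QueueMessage elements, each with an InsertionTime); the actual signed HTTP request is omitted:

```python
from xml.etree import ElementTree
from email.utils import parsedate_to_datetime

def newest_first(queue_messages_xml):
    """Parse the XML body of a Get Messages response and return the
    messages sorted with the most recently inserted one first."""
    root = ElementTree.fromstring(queue_messages_xml)
    messages = []
    for msg in root.findall("QueueMessage"):
        messages.append({
            "id": msg.findtext("MessageId"),
            "inserted": parsedate_to_datetime(msg.findtext("InsertionTime")),
            "text": msg.findtext("MessageText"),
        })
    # LIFO on the client side: newest insertion time first
    return sorted(messages, key=lambda m: m["inserted"], reverse=True)

# A hypothetical two-message response body, for illustration only
sample = """<QueueMessagesList>
  <QueueMessage>
    <MessageId>a1</MessageId>
    <InsertionTime>Fri, 09 Oct 2015 21:04:30 GMT</InsertionTime>
    <MessageText>first</MessageText>
  </QueueMessage>
  <QueueMessage>
    <MessageId>a2</MessageId>
    <InsertionTime>Fri, 09 Oct 2015 21:05:30 GMT</InsertionTime>
    <MessageText>second</MessageText>
  </QueueMessage>
</QueueMessagesList>"""

print(newest_first(sample)[0]["text"])  # → second
```

Note that after fetching the batch you would still have to delete or let expire the messages you are not interested in, which is part of why this only works for small queues.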
Also, is there a faster way to process messages from Azure other than
using the Peek-Lock Message REST API?
One possibility could be to fetch more than one message from the queue at a time and process them in parallel on the client side.
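As a sketch of that approach, assuming the batch of messages has already been fetched, the client-side fan-out could look like this (the process function is a placeholder for your real per-message work):

```python
from concurrent.futures import ThreadPoolExecutor

def process(message):
    # Placeholder for the real per-message work
    # (e.g. parse, act on, then delete the message)
    return message.upper()

messages = ["msg-1", "msg-2", "msg-3"]  # e.g. a batch from one Get Messages call

# Process the batch in parallel; map() preserves the input order of results
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, messages))

print(results)  # → ['MSG-1', 'MSG-2', 'MSG-3']
```

Parallel processing reduces wall-clock time per batch but does not change how fast the REST API itself serves messages, and it gives up any ordering guarantees between the messages in a batch.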

Related

Artemis REST API (through Jolokia) for clearing all messages on all queues

Is there any API that can help me clear messages in all queues within Artemis?
mbean: "org.apache.activemq.artemis:broker=\"1.1.1.1\",component=addresses,address=\"myaddressName\",subcomponent=queues,routing-type=\"multicast\",queue=\"myqueueName\""
operation: "removeAllMessages()"
type: "exec"
There's no programmatic way to simply delete all the data from every queue on the broker. However, you can combine a few management operations (e.g. in a script) to get the same result. You can use the getQueueNames method to get the name of every queue and then pass those names to the destroyQueue(String) method.
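As a rough sketch of such a script, these are the Jolokia "exec" payloads you would POST to the broker's Jolokia endpoint. The broker name in the mbean and the exact operation signatures are assumptions here; check them against your own broker's Jolokia listing:

```python
# Sketch: build the Jolokia exec payloads a cleanup script would POST
# (e.g. with urllib.request or requests) to the broker's Jolokia endpoint.
BROKER_MBEAN = 'org.apache.activemq.artemis:broker="1.1.1.1"'  # assumed name

def list_queues_payload():
    # First call: ask the broker control for every queue name
    return {
        "type": "exec",
        "mbean": BROKER_MBEAN,
        "operation": "getQueueNames()",
    }

def destroy_queue_payload(queue_name):
    # Second call, once per queue: destroy it (and its messages)
    return {
        "type": "exec",
        "mbean": BROKER_MBEAN,
        "operation": "destroyQueue(java.lang.String)",
        "arguments": [queue_name],
    }
```

A script would POST the first payload to something like http://host:8161/console/jolokia (host and port are placeholders), then loop over the returned names POSTing one destroy payload each.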
However, the simplest way to clear all the data would probably be to simply stop the broker, clear the data directory, and then restart the broker.

Sorting Service Bus Queue Messages

I was wondering if there is a way to attach metadata, or even multiple pieces of metadata, to a Service Bus queue message to be used later on in an application for sorting, while still maintaining FIFO in the queue.
So in short, what I want to do is:
Maintain FIFO, that is a First In First Out structure in the queue, but as the messages are coming in from different sources, I want to be able to tell which source a message came from, for example via metadata.
I know this is possible with Topics, where you can add a property to the message, but I am also unsure whether it is possible to add multiple properties to a topic message.
Hope I made myself clear on what I am asking.
I assume you use the .NET API. If that's the case, you can use the Properties dictionary to write and read your custom metadata:
BrokeredMessage message = new BrokeredMessage(body);
message.Properties.Add("Source", mySource);
You are free to add multiple properties too. This is the same for both Queues and Topics/Subscriptions.
I was wondering if there is a way to attach metadata, or even multiple pieces of metadata, to a Service Bus queue message to be used later on in an application for sorting, while still maintaining FIFO in the queue.
To maintain FIFO in the queue, you'd have to use Message Sessions; without message sessions you would not be able to maintain FIFO in the queue itself. You would be able to set a custom property, use it in your application, and sort out messages once they are received out of order, but you won't receive messages in FIFO order as you were asking in your original question.
If you drop the requirement of having the order preserved on the queue, then the answer @Mikhail has provided will be suitable for in-process sorting based on custom properties. Just be aware that in-process sorting will not be a trivial task.
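To make the in-process option concrete, here is a minimal sketch. It assumes the sender stamps each message with a numeric "seq" custom property and that the receiver has buffered a complete batch before sorting; what it glosses over (how long to wait for gaps and late arrivals) is exactly what makes in-process sorting non-trivial:

```python
# Receive-side reordering by a custom application property.
# Each dict stands in for one received message; "seq" is the
# sender-assigned custom property, "body" the message body.
received = [
    {"seq": 3, "body": "c"},
    {"seq": 1, "body": "a"},
    {"seq": 2, "body": "b"},
]

# Sort the buffered batch back into send order
in_order = sorted(received, key=lambda m: m["seq"])
print([m["body"] for m in in_order])  # → ['a', 'b', 'c']
```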

How to design spring batch to avoid long queue of request and restart failed job

I am writing a project that will be generating reports. It would read all requests from a database by making a rest call, based on the type of request it will make a rest call to an endpoint, after getting the response it will save the response in an object and save it back to the database by making a call to an endpoint.
I am using spring-batch to handle the batch work. So far what I came up with is a single job (reader, processor, writer) that will do the whole thing. I am not sure if this is the correct design, considering:
I do not want to queue up requests if some request is taking a long time to get a response back. [not sure yet]
I do not want to hold up saving responses until all the responses are received. [using commit-interval will help]
If the job crashes for some reason, how can I restart the job? [maybe using batch-admin will help, but what are my other options]
With chunk-oriented processing, the Reader, Processor and Writer get executed in order until the Reader has nothing left to return.
If you can read one item at a time, process it and send it back to the endpoint that handles the persistence this approach is handy.
If you must read ALL the information at once, the reader will return one big collection with all the items and pass it to the processor. The processor will process all the items and send the result to the writer. You cannot send just a few items at a time to the writer, so you would have to do the persistence directly from the processor, and that would be against the design.
So, as I understand this, you have two options:
Design a reader that can read one item at a time. Use the chunk oriented processing that you already started to read one item, process it and send it back for persistence. Have a look at how other readers are implemented (like JdbcCursorItemReader).
You create a tasklet that reads the whole collection of items, processes it, and sends the items back for persistence. You can break this up into different tasklets.
commit-interval only controls after how many items a transaction is committed, so it will not help you here, as all the processing and persistence is done by calling REST services.
I have figured out a design and I think it will work fine.
As for the questions that I asked, following are the answers:
Using asynchronous processors will help avoid queuing up requests.
http://docs.spring.io/spring-batch/trunk/reference/html/springBatchIntegration.html#asynchronous-processors
Using commit-interval will solve it.
This thread has the answer: Spring batch: Restart a job and then start next job automatically

Mule messages currently in the VM Queue

How can I get a copy of all messages (or references to all messages) that are on a VM queue?
I want to loop through the list of messages that are currently on the VM queue, check the payload of every message, and based on that make a decision about the next step in the flow.
Thanks.
You have to consume them with a vm:inbound-endpoint, you can't peek or browse them without actually taking them out of the queue.
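So the usual pattern is a flow that consumes each message and routes on its payload. A rough Mule 3 sketch, where the queue path, the expression, and the target flow names are all placeholders:

```xml
<flow name="processVmMessages">
    <!-- Consuming is the only way to look at the messages -->
    <vm:inbound-endpoint path="myQueue" exchange-pattern="one-way"/>
    <choice>
        <!-- Inspect the payload and pick the next step -->
        <when expression="#[payload.type == 'report']">
            <flow-ref name="handleReport"/>
        </when>
        <otherwise>
            <flow-ref name="handleDefault"/>
        </otherwise>
    </choice>
</flow>
```

If a message must survive the inspection, the flow has to re-enqueue it (or forward it) itself, because consuming removes it from the queue.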

Replacing a message in a jms queue

I am using ActiveMQ to pass requests between different processes. In some cases, I have multiple duplicate messages (which are requests) in the queue. I would like to have only one. Is there a way to send a message such that it will replace an older message with similar attributes? If there isn't, is there a way to inspect the queue and check for a message with specific attributes (in this case I will not send the new message if an older one exists)?
Clarification (based on Dave's answer): I am actually trying to make sure that there aren't any duplicate messages on the queue, to reduce the amount of processing that happens whenever the consumer gets a message. Hence I would like either to replace a message or not put it on the queue at all.
Thanks.
This sounds like an ideal use case for the Idempotent Consumer which removes duplicates from a queue or topic.
The following example shows how to do this with Apache Camel, which is the easiest way to implement any of the Enterprise Integration Patterns, particularly if you are using ActiveMQ, which comes with Camel integrated out of the box.
from("activemq:queueA").
    idempotentConsumer(header("myHeader"), memoryMessageIdRepository(200)).
    to("activemq:queueB");
The only trick to this is making sure there's an easy way to calculate a unique ID expression for each message, such as pulling a value out of the document via XPath or, as in the example above, using some unique message header.
You could browse the queue and use selectors to identify the message. However, unless you have a small number of messages this won't scale very well. Instead, your message should just be a pointer to a database record (or set of records). That way you can update the record, and whoever gets the message will then access the latest version of the record.
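A minimal sketch of that pointer pattern, with plain Python objects standing in for the database and the JMS queue (all names here are illustrative):

```python
# "Message as pointer": the queue carries only a record id, and producers
# update the record in place instead of enqueuing duplicate requests.
records = {}    # stands in for the database table
queue = []      # stands in for the JMS queue
pending = set() # record ids that already have a message in flight

def submit(record_id, payload):
    records[record_id] = payload      # always store the latest version
    if record_id not in pending:      # enqueue a pointer only once
        pending.add(record_id)
        queue.append(record_id)

def consume():
    record_id = queue.pop(0)
    pending.discard(record_id)
    return records[record_id]         # consumer reads the latest version

submit("req-1", "v1")
submit("req-1", "v2")                 # duplicate request: record updated, no new message
print(len(queue), consume())          # → 1 v2
```

In a real system the pending set would live in the database (e.g. a status column) so that it survives restarts and is shared between producers.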