RabbitMQ temporary queues for status updates in asynchronous REST

I am designing a REST API which works according to the asynchronous design detailed here. I am using RabbitMQ to enqueue the initial requests - so the client makes a call, receives a 202 Accepted response, and the job is enqueued by the server. In order that clients can get status updates ('percent done') on tasks we have a secondary queue resource, just as in the linked article.
Given that each task has its own queue resource it seems we need one temporary RabbitMQ queue per task. I am wondering whether this is a wise design choice, even though I can't really see any other options. It seems unlikely to be very efficient, and I am uneasy about the possibility of having lots of temporary queues being created like this, especially as I cannot see a way to guarantee that they will all be cleaned up (despite RabbitMQ's auto-delete facility). Prior to RabbitMQ I was using SQS for this, and have painful experience of what can happen in this respect.
I note that a similar type of queue management will already be familiar to those using RabbitMQ in RPC style. Is there a possible alternative, however?

First of all, each queue uses approximately 20 KB of memory, so whether having a lot of them is viable is up to you and your hardware. But in general, it smells. Really.
For status updates I see nothing wrong with using a key-value database, like Redis or even memcache, and updating the percent done there. Status checks (as well as updates) will then be very fast, simple and lightweight.
Update:
I can suggest the following architecture (a rough sketch in code follows these steps):
The client POSTs the task payload to some endpoint, say /tasks.
The application generates a unique task id (a UUID, aka GUID, is your friend here), publishes the task with its id to a RabbitMQ queue, and then returns the id to the client.
Workers (one or many) consume tasks from RabbitMQ and, depending on the processing step, update a Redis key named after the task id with some value (step, percentage done, estimated time to completion). So it may look like SET task:{id} "<some value>". When the task is completed, the worker MAY update the Redis key with the task result, or store the result somewhere else and then set the Redis key to indicate that the task is finished.
The client MAY, from time to time, GET /tasks/{id} to receive the task status or its result.
When the application receives GET /tasks/{id}, it returns the task status represented by the Redis key (GET task:{id}). If the key is not set (nil), the task has not yet been taken by a worker.
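A minimal sketch of that flow, assuming Flask, pika and redis-py are available; the route names mirror the steps above, and the queue name and progress values are purely illustrative:

    import json
    import uuid

    import pika
    import redis
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    r = redis.Redis()

    @app.post("/tasks")
    def create_task():
        task_id = str(uuid.uuid4())
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = conn.channel()
        channel.queue_declare(queue="tasks", durable=True)
        channel.basic_publish(exchange="", routing_key="tasks",
                              body=json.dumps({"id": task_id,
                                               "payload": request.get_json()}))
        conn.close()
        return jsonify({"id": task_id}), 202          # Accepted; poll /tasks/{id}

    @app.get("/tasks/<task_id>")
    def task_status(task_id):
        status = r.get(f"task:{task_id}")
        if status is None:                            # nil: not yet taken by a worker
            return jsonify({"status": "queued"})
        return jsonify({"status": status.decode()})

    # Worker side (a separate process): consume tasks, report progress to Redis.
    def handle(ch, method, properties, body):
        task = json.loads(body)
        for pct in (25, 50, 75):                      # stand-in for real work steps
            r.set(f"task:{task['id']}", f"{pct}% done")
        r.set(f"task:{task['id']}", "finished")
        ch.basic_ack(delivery_tag=method.delivery_tag)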
P.S.
RPC is something different from what you asked about, but I would recommend reading this question for some details.


Is it possible to combine REST and messaging for microservices?

We have the first version of an application based on a microservice architecture. We used REST for external and internal communication.
Now we want to switch to AP from CP (CAP theorem)* and use a message bus for communication between microservices.
There is a lot of information about how to create an event bus based on Kafka, RabbitMQ, etc.
But I can't find any best practices for a combination of REST and messaging.
For example, you create a car service and you need to add different car components. It would make more sense, for this purpose, to use REST with POST requests. On the other hand, a service for booking a car would be a good task for an event-based approach.
Do you have a similar approach when you have different vocabularies and business-logic capabilities? How do you combine them? Just support both approaches separately? Or unify them into one approach?
* for the first version, we agreed to choose consistency and partition tolerance. But now availability becomes more important for us.
Bottom line up front: You're looking for Command Query Responsibility Segregation (CQRS), which defines an architectural pattern for separating the responsibility of querying for data from that of asking for a process to be run. The short answer is that you do not want to mix the two in a blocking fashion, in either a query or a process. The rest of this answer will go into detail as to why, and cover the three different ways you can do what you're trying to do.
This answer is a short form of the experience I have with microservices. My bona fides: I've created microservices topologies from scratch (and from nearly zero knowledge) and, as they say, hit every branch on the way down.
One of the benefits of starting from zero-knowledge is that the first topology I created used a mixture of intra-service synchronous and blocking (HTTP) communication (to retrieve data needed for an operation from the service that held it), and message queues + asynchronous events to run operations (for Commands).
I'll define both terms:
Commands: Telling a service to do something. For instance, "Run ETL Batch job". You expect there to be an output from this; but it is necessarily a process that you're not going to be able to reliably wait on. A command has side-effects. Something will change because of this action (If nothing happens and nothing changes, then you haven't done anything).
Query: Asking a service for data that it holds. This data may have been there because of a Command given, but asking for data should not have side effects. No Command operations should need to be run because of a Query received.
Anyway, back to the topology.
Level 1: Mixed HTTP and Events
For this first topology, we mixed Synchronous Queries with Asynchronous Events being emitted. This was... problematic.
Message Buses are by their nature observable. One setting in RabbitMQ, or an Event Source, and you can observe all events in the system. This has some good side-effects, in that when something happens in the process you can typically figure out what events led to that state (if you follow an event-driven paradigm + state machines).
HTTP Calls are not observable without inspecting network traffic or logging those requests (which itself has problems, so we're going to start with "not feasible" in normal operations). Therefore if you mix a message based process and HTTP calls, you're going to have holes where you can't tell what's going on. You'll have spots where due to a network error your HTTP call didn't return data, and your services didn't continue the process because of that. You'll also need to hook up Retry/Circuit Breaker patterns for your HTTP calls to ensure they at least try a few times, but then you have to differentiate between "Not up because it's down", and "Not up because it's momentarily busy".
In short, mixing the two methods for a Command Driven process is not very resilient.
Level 2: Events define RPC/Internal Request/Response for data; Queries are External
In step two of this maturity model, you separate out Commands and Queries. Commands should use an event driven system, and queries should happen through HTTP. If you need the results of a query for a Command, then you issue a message and use a Request/Response pattern over your message bus.
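As a hedged illustration of that Request/Response pattern over RabbitMQ, here is a sketch modeled on pika's RPC tutorial; the routing key and payload are made up:

    import uuid
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue

    response, corr_id = None, str(uuid.uuid4())

    def on_reply(ch, method, props, body):
        global response
        if props.correlation_id == corr_id:        # ignore replies meant for others
            response = body

    channel.basic_consume(queue=reply_queue, on_message_callback=on_reply,
                          auto_ack=True)
    channel.basic_publish(
        exchange="",
        routing_key="query_service",               # queue the answering service owns
        properties=pika.BasicProperties(reply_to=reply_queue,
                                        correlation_id=corr_id),
        body=b'{"customer_id": 42}',               # made-up query payload
    )
    while response is None:
        conn.process_data_events(time_limit=1)     # pump I/O until the reply lands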
This has benefits and problems too.
Benefits-wise your entire Command is now observable, even as it hops through multiple services. You can also replay processes in the system by rerunning events, which can be useful in tracking down problems.
Problems-wise, some of your events now look a lot like queries, and you're recreating the beautiful semantics that HTTP and REST already give you, but for messages; that's not terribly fun or useful. As an example, a 404 tells you there's no data in REST. For a message-based event, you have to recreate those semantics (there's a good YouTube conference talk on the subject I can't find, but a team tried to do just that, with great pain).
However, your events are now asynchronous and non-blocking, and every service can be refactored into a state machine that responds to a given event. One caveat is that those events should contain all the data needed for the operation (which leads to messages growing over the course of a process).
Your queries can still use HTTP for external communication; but for internal command/processes, you'd use the message bus.
I don't recommend this approach either (though it's a step up from the first approach). I don't recommend it because of the impurity your events start to take on, and in a microservices system having contracts be the same throughout the system is important.
Level 3: Producers of Data emit data as events. Consumers Record data for their use.
The third step in the maturity model (and we were on our way to that paradigm when I departed from the project) is for services that produce data to issue events when that data is produced. That data is then jotted down by services listening for those events, and those services will use that (possibly stale) data to conduct their operations. External customers still use HTTP; but internally you emit events when new data is produced, and each service that cares about that data will store it to use when it needs to. This is the crux of Michael Bryzek's talk Designing Microservices Architecture the Right Way. Michael Bryzek is the CTO of Flow.io, a white-label e-commerce company.
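A rough sketch of that idea with pika, assuming a fanout exchange; the exchange name, payload and in-memory "store" are illustrative only:

    import json
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.exchange_declare(exchange="customer_events", exchange_type="fanout")

    # Producer side: the owning service emits an event whenever its data changes.
    channel.basic_publish(exchange="customer_events", routing_key="",
                          body=json.dumps({"id": 42, "tier": "gold"}))

    # Consumer side: each interested service binds its own queue and keeps a
    # local (possibly stale) copy to use when it later runs its own operations.
    local_copy = {}
    queue = channel.queue_declare(queue="", exclusive=True).method.queue
    channel.queue_bind(exchange="customer_events", queue=queue)

    def record(ch, method, props, body):
        customer = json.loads(body)
        local_copy[customer["id"]] = customer

    channel.basic_consume(queue=queue, on_message_callback=record, auto_ack=True)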
If you want a deeper answer along with other issues at play, I'll point you to my blog post on the subject.

Pessimistic locking mechanism with IReliableQueue in Azure Service Fabric

I understand locking is scoped per transaction for IReliableQueue in Service Fabric. I have a requirement where, once the data is read from the ReliableQueue within a transaction, I need to pass the data back to my client and preserve the lock on that data for a certain duration; if the processing fails in the client, the data should be written back to the queue (preferably at the head, so that it is picked up first in the next iteration).
Service Fabric doesn't support this. I recommend you look into using an external queuing mechanism for this. For example, Azure Service Bus Queues provide the functionality you describe.
You can use this package to receive SB messages within your services.
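For illustration, a sketch of the peek-lock flow with the azure-servicebus Python client (v7); the connection string, queue name and process() are placeholders, and a real Service Fabric service would typically do this in C#:

    from azure.servicebus import ServiceBusClient

    CONN_STR = "<your Service Bus connection string>"   # placeholder

    def process(msg):
        print("handling", str(msg))                     # placeholder client logic

    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_receiver(queue_name="jobs") as receiver:
            for msg in receiver.receive_messages(max_message_count=1,
                                                 max_wait_time=5):
                try:
                    process(msg)                        # hand data to the client
                    receiver.complete_message(msg)      # done: remove from queue
                except Exception:
                    receiver.abandon_message(msg)       # failed: lock released,
                                                        # message available again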
preserve the lock on that data for a certain duration
We have done that once or twice in other contexts, with success, using modifiable lists and a document field LockedUntilUtc (initialized to a minimum value or null), or using a separate reliable collection of locked keys (perhaps sorted on LockedUntilUtc) - whichever best suits your needs.
If you can't trust your clients to adhere to such a lock-request and write/unlock-request contract, consider an ETag pattern - a token only returned on a successful lock request...
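A rough, self-contained illustration of that lease-plus-token idea; the record shape, lease duration and in-memory store are all made up:

    import time
    import uuid

    records = {"job-1": {"locked_until": 0.0, "etag": None, "data": "..."}}

    def try_lock(key, lease_seconds=30):
        rec = records[key]
        now = time.time()
        if rec["locked_until"] > now:
            return None                        # still leased to another client
        rec["locked_until"] = now + lease_seconds
        rec["etag"] = str(uuid.uuid4())        # token returned only on success
        return rec["etag"]

    def unlock(key, etag):
        rec = records[key]
        if rec["etag"] == etag:                # clients can't unlock without the token
            rec["locked_until"] = 0.0
            rec["etag"] = None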

MSMQ as a job queue

I am trying to implement a job queue with MSMQ to save myself some time over implementing it in SQL. After reading around I realized MSMQ might not offer what I am after. Could you please advise me whether my plan is realistic using MSMQ, or recommend an alternative?
I have a number of processes picking up jobs from a queue (I might need to scale out in the future). Once a job is picked up, processing follows; during this time the job is locked to other processes by its status. If needed, the job is chucked back (the status changes again) to the queue for further processing, but physically the job still sits in the queue until completed.
MSMQ doesn't let me keep the message in the queue while working on it; I can only peek or read. Read takes the message out of the queue, and peek doesn't allow changing the message (status).
Thank you
Using MSMQ as a datastore is probably bad as it's not designed for storage at all. Unless the queues are transactional the messages may not even get written to disk.
Certainly updating queue items in-situ is not supported for the reasons you state.
If you don't want a full blown relational DB you could use an in-memory cache of some kind, like memcached, or a cheap object db like raven.
Take a look at RabbitMQ, or any of the many other message queues. Most offer this functionality out of the box.
For example, RabbitMQ calls what you are describing Work Queues. Multiple consumers can pull from the same queue and not pull the same item. Furthermore, if you use acknowledgements and the processing fails, the item is not removed from the queue.
.net examples:
https://www.rabbitmq.com/tutorials/tutorial-two-dotnet.html
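And, as a hedged Python (pika) counterpart of the linked tutorial: with manual acknowledgements a message is only removed once the worker confirms it, and a failed worker causes redelivery. The queue name and job logic are placeholders.

    import pika

    def process(body):
        print("working on", body)              # placeholder for real job logic

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="jobs", durable=True)
    channel.basic_qos(prefetch_count=1)        # at most one unacked job per worker

    def handle(ch, method, properties, body):
        try:
            process(body)
            ch.basic_ack(delivery_tag=method.delivery_tag)      # done: remove it
        except Exception:
            ch.basic_nack(delivery_tag=method.delivery_tag,
                          requeue=True)                         # failed: put it back

    channel.basic_consume(queue="jobs", on_message_callback=handle)
    channel.start_consuming()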
EDIT: After using MSMQ myself, it would probably work very well for what you are doing, as far as I can tell. The key is to use transactions and multiple queues. For example, each status should have its own queue. It's fairly safe to "move" messages from one queue to another, since the move occurs within a transaction. This moving of messages is essentially your change of status.
We also use the Message Extension byte array for storing message metadata, like status. This way we don't have to alter the actual message when moving it to another queue.
MSMQ, and queues in general, require a different set of patterns than what most programmers are used to. Keep that in mind.
Perhaps, if you can give more information on why you need to peek for messages that are currently in process, there would be a way to handle that scenario with MSMQ. You could always add a database for additional tracking.

RESTful Job Assignment

I have a collection of jobs that need processing, http://example.com/jobs. Each job has a status of "new", "assigned" or "finished".
I want slave processes to pick off one "new" job, set its status to "assigned", and then process it. I want to ensure each job is only processed by a single slave.
I considered having each slave do the following:
GET http://example.com/jobs
Pick one that's "new" and do an HTTP PUT to http://example.com/jobs/123 {"status=assigned"}.
Repeat
The problem is that another slave may have assigned the job to itself between the GET and PUT. I could have the second PUT return a 409 (conflict), which would signal the second slave to try a different job.
Am I on the right track, or should I do this differently?
I would have one process that picks "new" jobs and assigns them. Other processes would independently go in and look to see if they've been assigned a job. You'd have to have some way to identify which process a job is assigned to, so some kind of slave process id would be called for.
(You could use POST too, as what you're trying to do shouldn't be idempotent anyway).
You could give each of your clients a unique ID (possibly a UUID) and have an "assignee/worker" field in your job resource.
GET http://example.com/jobs/
POST { "worker"=$myID } to http://example.com/jobs/123
GET http://example.com/jobs/123 and check that the worker ID is that of the client
You could combine this with conditional requests too.
On top of this, you could have a time-out feature: if the job queue doesn't hear back from a given client within some window, it puts the job back in the queue.
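One hedged way to make the claim atomic with conditional requests, assuming the server emits ETags on job resources and honors If-Match (the URL, field names and helper are illustrative):

    import requests

    def try_claim(job_url, worker_id):
        job = requests.get(job_url)
        resp = requests.put(
            job_url,
            json={"status": "assigned", "worker": worker_id},
            headers={"If-Match": job.headers["ETag"]},   # fails if someone beat us
        )
        return resp.status_code in (200, 204)            # 409/412 means try another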
It looks like the statuses are an essential part of your job domain model, so I would expose them as dedicated sub-resources:
# 'idle' is what you called 'new'
GET /jobs/idle
GET /jobs/assigned
# start job
PUT /jobs/assigned/123
A slave is only allowed to gather jobs via GET /jobs/idle. This never includes jobs which are running. Still, there could be race conditions (two slaves get the set before one of them has started a job). I think 400 Bad Request or the 409 Conflict you mentioned are alright for that.
I prefer the above resource structure to working with payloads (which often looks more "procedural" to me).
I was a little too specific. I don't actually care that the slave gets to pick the job, just that it gets a unique one.
With that in mind, I think #manuel aldana was on the right track, but I've made a few modifications.
I'll keep the /jobs resource, but also expose a /jobs/assigned resource. A single job may exist in both collections.
The slave can POST to /jobs/assigned with no parameters. The server will choose one "new" job, move it to "assigned", and return the url (/jobs/assigned/{jobid} or /jobs/{jobid}) in the Location header with a 201 status.
When the slave finishes the job, it will PUT to /jobs/{jobid} (status=finished).
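A minimal Flask sketch of that final design, with an in-memory job store standing in for a real database (where the lock below would be the database's own compare-and-set); job ids and routes are illustrative:

    import threading
    from flask import Flask, jsonify

    app = Flask(__name__)
    jobs = {"123": {"status": "new"}, "124": {"status": "new"}}
    lock = threading.Lock()

    @app.post("/jobs/assigned")
    def assign_job():
        with lock:                             # stand-in for a DB compare-and-set
            for job_id, job in jobs.items():
                if job["status"] == "new":
                    job["status"] = "assigned"
                    return "", 201, {"Location": f"/jobs/{job_id}"}
        return jsonify({"error": "no new jobs"}), 404

    @app.put("/jobs/<job_id>")
    def finish_job(job_id):
        jobs[job_id]["status"] = "finished"
        return "", 204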

Memcache-based message queue?

I'm working on a multiplayer game and it needs a message queue (i.e., messages in, messages out, no duplicates or deleted messages assuming there are no unexpected cache evictions). Here are the memcache-based queues I'm aware of:
MemcacheQ: http://memcachedb.org/memcacheq/
Starling: http://rubyforge.org/projects/starling/
Depcached: http://www.marcworrell.com/article-2287-en.html
Sparrow: http://code.google.com/p/sparrow/
I learned the concept of the memcache queue from this blog post:
All messages are saved with an integer as key. There is one key that holds the next key and one that holds the key of the oldest message in the queue. To access these, the increment/decrement method is used, as it's atomic, so there are two keys that act as locks. They get incremented, and if the return value is 1, the process has the lock; otherwise it keeps incrementing. Once the process is finished, it sets the value back to 0. Simple but effective. One caveat is that the integer will overflow, so there is some logic in place that sets the used keys to 1 once we are close to that limit. As the increment operation is atomic, the lock is only needed if two or more memcaches are used (for redundancy), to keep those in sync.
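A literal sketch of that counter-as-lock scheme, assuming pymemcache; the key name and retry bound are illustrative, and note that this spins rather than blocks:

    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    def acquire(lock_key, max_spins=10000):
        client.add(lock_key, 0, noreply=False)       # create the counter if absent
        for _ in range(max_spins):
            if client.incr(lock_key, 1, noreply=False) == 1:
                return True                          # our increment won the lock
        return False                                 # gave up; counter keeps growing

    def release(lock_key):
        client.set(lock_key, 0, noreply=False)       # lets the next incr return 1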
My question is, is there a memcache-based message queue service that can run on App Engine?
I would be very careful using the Google App Engine Memcache in this way. You are right to be worrying about "unexpected cache evictions".
Google expect you to use the memcache for caching data and not storing it. They don't guarantee to keep data in the cache. From the GAE Documentation:
By default, items never expire, though items may be evicted due to memory pressure.
Edit: There's always Amazon's Simple Queue Service (SQS). However, this may not meet your price/performance requirements either, as:
There would be the latency of calling from Google's servers to Amazon's.
You'd end up paying twice for all the data traffic - paying for it to leave Google and then paying again for it to go into Amazon.
I have started a Simple Python Memcached Queue, it might be useful:
http://bitbucket.org/epoz/python-memcache-queue/
If you're happy with the possibility of losing data, by all means go ahead. Bear in mind, though, that although memcache generally has lower latency than the datastore, like anything else, it will suffer if you have a high rate of atomic operations you want to execute on a single element. This isn't a datastore problem - it's simply a problem of having to serialize access.
Failing that, Amazon's SQS seems like a viable option.
Why not use Task Queue:
https://developers.google.com/appengine/docs/python/taskqueue/
https://developers.google.com/appengine/docs/java/taskqueue/
It seems to solve the issue without the likely loss of messages of a memcache-based queue.
Until Google implements a proper job queue, why not use the datastore? As others have said, memcache is just a cache and could lose queue items (which would be... bad).
The datastore should be more than fast enough for what you need. You would just have a simple Job model, which would be more flexible than memcache, as you're not limited to key/value pairs.
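For illustration, a minimal sketch of such a Job model with an atomic claim, assuming the (legacy) App Engine ndb library; the field names are made up:

    from google.appengine.ext import ndb

    class Job(ndb.Model):
        payload = ndb.JsonProperty()
        status = ndb.StringProperty(default="new",
                                    choices=["new", "assigned", "finished"])
        worker = ndb.StringProperty()

    @ndb.transactional
    def claim(job_key, worker_id):
        job = job_key.get()
        if job.status != "new":
            return None                # another worker claimed it first
        job.status = "assigned"
        job.worker = worker_id
        job.put()                      # committed atomically with the check above
        return job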