Async callback using Scala and Play 2 or Spray

I have a systems design challenge that I would like to get some community feedback on.
Basic system structure:
[Client] ---HTTP-POST--> [REST Service] ---> [Queue] ---> [Processors]
[Client] POSTs JSON to [REST Service] for processing.
Based on the request, the [REST Service] sends data to various queues to be picked up by various processors, written in different languages and running in separate processes.
Work is parallelized in each processor but can still take up to 30 seconds to process. The time to process is a function of the complexity of the data and cannot be sped up.
The result cannot be streamed back to the client as it is completed, because there is a final post-processing step that can only run once all the sub-steps are completed.
Key challenge: once the post-processing is complete, the client either needs to:
be sent the results after it has been waiting, or
be notified asynchronously that the job is complete and given an id with which to request the final result.
Design requirements
I don't want to block the [REST Service]. It needs to take the incoming request, route the data to the appropriate queues for processing in other processes, and then be immediately available for the next incoming request.
Normally I would have used actors and/or futures/promises so that the [REST Service] is not blocked while waiting for the background workers to complete. The challenge here is that the workers doing the background work run in separate processes/VMs and are written in various technology stacks. To pass messages between these heterogeneous systems, and to ensure the integrity of the request lifetime, a durable queue is used (not in-memory message passing or RPC).
Final point of consideration: in order to scale, there is a load-balanced pool of [REST Services] and a pool of [Processors]. Since the messages from the [REST Service] to the [Processor] are sent asynchronously via a queue (and everything runs in separate processes), there is no way to correlate the work done in a background [Processor] back to its original calling [REST Service] instance, so the final processed data cannot be returned in a promise or actor message and passed back to the original client.
So, the question is: how do I make this correlation? Once all the background processing is completed, I need to get the result back to the client, either via a long-held response or a notification (I do not want to use something like UrbanAirship, as most of the clients are browsers or other services).
I hope this is clear, if not, please ask for clarification.
Edit: Possible solution - thoughts?
I think I can pass a spray RequestContext to any actor, which can then respond back to the client (it does not have to be the original actor that received the HTTP request). If this is true, can I cache the RequestContext and then use it later to asynchronously send the response to the appropriate client when the processing is completed?
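Something like this is what I have in mind (a rough sketch assuming spray-routing; the job-id correlation and message types here are made up):

import akka.actor.Actor
import spray.routing.RequestContext

// hypothetical protocol: Enqueue caches the context when the job is sent
// to the durable queue; Completed arrives when a queue consumer reports
// that post-processing is done
case class Enqueue(jobId: String, ctx: RequestContext)
case class Completed(jobId: String, result: String)

class PendingRequests extends Actor {
  private var pending = Map.empty[String, RequestContext]

  def receive = {
    case Enqueue(jobId, ctx) =>
      pending += jobId -> ctx // the cached context outlives this message
    case Completed(jobId, result) =>
      pending.get(jobId).foreach(_.complete(result)) // respond to the waiting client
      pending -= jobId
  }
}

Note that a cached RequestContext can only complete the response on the instance that received the original request, so the completion event would still have to be routed back to that same [REST Service] instance (for example via a per-instance reply queue).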

Well, it's not the best solution because it requires more work from your client, but it sounds like you want to implement a webhook. So:
[Client] ---POST--> [REST Service] ---> [Calculations] ---POST--> [Client]
[Client] ---GET--> [REST Service]
For explanation:
The client sends a POST request to your service. Your service then does whatever processing is necessary. Upon completion, your service sends an HTTP POST to a URL that the client has already registered. With that POST data, the client then has the information necessary to make a GET request for the completed data.
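A minimal sketch of the callback leg, assuming the client registered a callbackUrl with its original POST (the JSON body and helper name are illustrative), using Java 11's HttpClient from Scala:

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object WebhookNotifier {
  // once post-processing finishes, POST the job id to the client's URL;
  // the client then GETs the final result using that id
  def notifyClient(callbackUrl: String, jobId: String): Unit = {
    val request = HttpRequest.newBuilder(URI.create(callbackUrl))
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(s"""{"jobId":"$jobId"}"""))
      .build()
    HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
  }
}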

Related

How does an application server handle multiple requests to save data into a table?

I have created a web application in JSF, and it has a button.
If the button is clicked, it will go to the server side and execute the function below to save the data in a table; I am using MyBatis for this.
public void save(A a) {
    SqlSession session = null;
    try {
        session = SqlConnection.getInstance().openSession();
        TestMapper testmap = session.getMapper(TestMapper.class);
        testmap.insert(a);
        session.commit();
    } catch (Exception e) {
        // an empty catch block would silently swallow insert failures
        e.printStackTrace();
    } finally {
        if (session != null) {
            session.close();
        }
    }
}
Now I have deployed this application on an application server, JBoss (WildFly).
As per my understanding, when multiple users access the application
by hitting the URL, the application server creates a thread for each user request.
For example, if 4 clients make requests, then 4 threads will be created, say t1, t2, t3 and t4.
If all 4 users hit the save button at the same time, how will the save method be executed? Will t1 enter the method and execute the insert statement
first, and then t2, t3 and t4, or will all 4 threads execute the insert simultaneously?
To give some context, I will first describe two possible approaches to handling requests. The case here is HTTP, but these approaches do not depend on the protocol used; the important thing is that requests come from the network and their execution requires some IO (access to the filesystem, a database, or network calls to other systems). Note that the following description contains some simplifications.
These two approaches are:
synchronous
asynchronous
In general, processing a typical HTTP request that involves DB access requires at least four IO operations:
request handler needs to read the request data from the client socket
request handler needs to write request to the socket connected to the DB
request handler needs to read response from the DB socket
request handler needs to write the response to the client socket
Let's see how this is done for both cases.
Synchronous
In this approach the server has a pool (think of a collection) of threads that are ready to serve requests.
When a request comes in, the server borrows a thread from the pool and executes the request handler in that thread.
When the request handler needs to do an IO operation, it initiates the IO operation and then waits for its completion. By wait I mean that the thread's execution is blocked until the IO operation completes and the data (for example, the response with the results of the SQL query) is available.
In this case, concurrency (that is, processing requests for multiple clients simultaneously) is achieved by having a number of threads in the pool. IO operations are much slower than the CPU, so most of the time a thread processing a request is blocked on an IO operation, and the CPU cores can execute stages of request processing for other clients.
Note that because of the slowness of IO operations, the thread pool used for handling HTTP requests is usually fairly large. The documentation for the synchronous request-processing subsystem used in WildFly suggests about 10 threads per CPU core as a reasonable value.
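As an illustration, here is a minimal sketch of the synchronous model (a toy echo server, not WildFly's actual implementation): a fixed pool of threads, each handling one request end to end with blocking IO.

import java.net.ServerSocket
import java.util.concurrent.Executors

object SyncEchoServer {
  def main(args: Array[String]): Unit = {
    val pool = Executors.newFixedThreadPool(40) // e.g. ~10 threads per core on a 4-core box
    val server = new ServerSocket(8080)
    while (true) {
      val socket = server.accept() // wait for the next client
      pool.execute(new Runnable {
        override def run(): Unit = {
          // blocking IO: the thread sits idle while each operation completes
          val b = socket.getInputStream.read() // read request data (blocks)
          socket.getOutputStream.write(b)      // write response (blocks)
          socket.close()
        }
      })
    }
  }
}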
Asynchronous
In this case the IO is handled differently. There is a small number of threads handling IO; they all work the same way, and I'll describe one of them.
Such a thread runs a loop which basically waits for events, and every time an event happens it calls a handler for that event.
The first such event is a new request. When request processing starts, the request handler is invoked from the loop run by one of the IO threads. The first thing the request handler does is try to read the request from the client socket: it initiates the IO operation on the client socket and returns control to the caller. That means the thread is released and can process another event.
Another event happens when the IO operation that reads from the client socket has some data available. The loop then resumes the handler at the point where it previously returned control, namely at the next step, which processes the input data (for example, parses the HTTP parameters) and initiates a new IO operation (in this case, a request to the DB socket). Again the handler releases the thread so it can handle other events (such as the completion of IO operations that are part of other clients' request processing).
Given that IO operations are slow compared to the CPU itself, one thread handling IO can process a lot of requests concurrently.
Note: it is important that the request handler code never uses any blocking operation (like blocking IO), because that would steal the IO thread and prevent other requests from proceeding.
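The same four IO steps can be sketched in the asynchronous style with Scala Futures; the IO functions here are hypothetical non-blocking stand-ins, each returning a Future instead of blocking its thread:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object AsyncSketch {
  // hypothetical stand-ins for the non-blocking IO operations above
  case class ClientSocket(id: Int)
  case class Request(sql: String)
  case class Row(data: String)

  def readRequest(s: ClientSocket): Future[Request]         = Future(Request("SELECT 1"))
  def queryDb(r: Request): Future[Row]                      = Future(Row("result"))
  def writeResponse(s: ClientSocket, r: Row): Future[Unit]  = Future(())

  // each step initiates IO and releases the thread; the next step runs as a
  // callback when the previous IO operation completes
  def handle(socket: ClientSocket): Future[Unit] =
    for {
      req <- readRequest(socket)
      row <- queryDb(req)
      _   <- writeResponse(socket, row)
    } yield ()
}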
JSF and Mybatis
In the case of JSF and MyBatis, the synchronous approach is used. JSF uses a servlet to handle requests from the UI, and servlets are handled by the synchronous processors in WildFly. JDBC, which MyBatis uses to communicate with the DB, also uses synchronous IO, so threads are used to execute requests concurrently.
Congestions
All of the above is written with the assumption that there are no other sources of congestion. By congestion here I mean a limitation on the ability of a certain component of the system to execute things in parallel.
For example, imagine a situation where a database is configured to only allow one client connection at a time (this is not a reasonable configuration; I'm using it only to demonstrate the idea). In this case, even if multiple threads can execute the code of the save method in parallel, all but one will be blocked at the moment they try to open the connection to the database.
Another similar example is the SQLite database. It only allows one client to write to the DB at a time. So at the point when thread A tries to execute an insert, it will be blocked if another thread B is already executing an insert, and only after thread B's commit will thread A be able to proceed with its insert. How long A waits depends on the time it takes B to execute its request and on the number of other threads waiting to write to the same DB.
In practice, if you are using an RDBMS that scales better (like PostgreSQL, MySQL or Oracle), you will not hit this problem with a small number of connections. But it may become a problem when there is a large number of concurrent requests and either the DB limits the number of client connections or a connection pool is used to limit the number of connections on the application side. In this case, if there are already many connections to the database, new clients will wait until existing requests are finished and connections are freed.
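The connection-pool case can be sketched with a semaphore: no matter how many request threads are free, at most N of them hold a "connection" at a time, and the rest block, which is exactly the congestion described above. This is an illustration of the idea, not how any particular pool is implemented.

import java.util.concurrent.Semaphore

class BoundedConnectionPool(maxConnections: Int) {
  private val permits = new Semaphore(maxConnections)

  def withConnection[T](work: => T): T = {
    permits.acquire() // blocks when all connections are in use
    try work
    finally permits.release()
  }
}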

Sending response in a Pub-Sub architecture

At the moment I have a single AWS EC2 instance which handles all incoming HTTP client requests. It analyses each request, decides which back-end worker server should handle it, and then makes an HTTP call to the chosen server. The back-end server responds when it has processed the request, and the front-end server then responds to the client. The front-end server is effectively a load balancer.
I now want to move to a pub-sub architecture instead of the front-end server pushing the requests to the back-end instances. The front-end server will do some basic processing and then simply put the request into an SNS queue; the logic of which back-end server should handle the request is left to the back-end servers themselves.
My question is: with this model, what is the best way to have the back-end servers notify the front-end server that they have processed the request? Previously they just replied to the HTTP request the front-end server sent, but now there is no direct request, just an item of work being published to a queue and a back-end instance picking it off the queue.
Pub-sub architectures are not well suited to responses/acknowledgements. Their fire-and-forget broadcasting pattern decouples publishers and subscribers: a publisher does not know if or how many subscribers there are, and the subscribers do not know which publisher generated a message. It can also be difficult to guarantee the sequence of responses; they won't necessarily match the sequence of messages, due to the nature of network communication, and handling different messages can take different amounts of time. So each message that needs to be acknowledged needs a unique ID that the subscriber can include in its response, so the publisher can match a response with the message sent. For example:
1. the publisher sends a "new event" message and provides a UUID for the event
2. many subscribers get the message; some may be handlers for the request, but others might be observers, loggers, analytics, etc.
3. if only one subscriber handles the message (e.g. the first subscriber to get a key from somewhere), that subscriber generates a "new event handled" message and includes the UUID
4. the original publisher, as well as any number of other subscribers, may get that message
5. the original publisher sees that the ID is in its cache as an unconfirmed message, and marks it as confirmed
6. if a certain amount of time passes without receiving a confirmation with the given ID, the original publisher republishes the original message with a new ID and removes the old ID from the cache
In step 3, if many subscribers handle the message instead of just one, it is less obvious how the original publisher should handle "responses": how does it know how many subscribers handled the message? Some could be down or too busy to respond, and some may still be in the process of responding by the time the original publisher determines that "not enough handlers have responded".
Publish-subscribe architectures should be designed not to request any response, but instead to check for some condition that should have happened as a result of the command being handled, such as a thumbnail having been generated (which it can assume happened as a result of some handler processing the message).
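A minimal sketch of the UUID-and-cache scheme from the steps above; the publish function is a hypothetical stand-in for the actual transport:

import java.util.UUID
import scala.collection.concurrent.TrieMap

class ConfirmingPublisher(publish: (String, UUID, String) => Unit,
                          timeoutMillis: Long) {
  // unconfirmed messages, keyed by UUID, with payload and send time
  private val pending = TrieMap.empty[UUID, (String, Long)]

  def send(payload: String): UUID = {
    val id = UUID.randomUUID()
    pending.put(id, (payload, System.currentTimeMillis()))
    publish("new-event", id, payload)
    id
  }

  // called when a "new event handled" message arrives with the same UUID
  def confirm(id: UUID): Unit = pending.remove(id)

  // run periodically: republish anything unconfirmed, under a fresh ID
  def republishExpired(): Unit = {
    val now = System.currentTimeMillis()
    for ((id, (payload, sentAt)) <- pending if now - sentAt > timeoutMillis) {
      pending.remove(id) // drop the old ID from the cache
      send(payload)      // resend with a new ID
    }
  }
}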

Netty 4.0, SO_KEEPALIVE: one connection sends many requests to the server; how to process the requests concurrently?

One connection sends many requests to the server. How can the requests be processed concurrently?
Please use a simple example, like the time server or echo server from netty.io, to illustrate the operation.
One way I can see is to create a separate threaded handler that is called in a producer/consumer fashion.
The producer is your "network" handler: it hands messages to the consumers, therefore not waiting for any answer and being able to proceed with the next request.
The consumer is your "business" handler, one per connection but possibly multi-threaded, consuming messages with multiple instances and able to answer using the Netty context of the connection to which it is attached.
Another option for the consumer would be to have only one handler, still multi-threaded, but with each message carrying its original Netty context, so that it can answer the client regardless of which connection it came from.
But the difficulties will come soon:
How to deal with answers arriving out of order on the client side: say the client sends 3 requests A, B and C, and due to the speed of the business handler the answers come back as C, A, B. You have to deal with this and know which request each answer belongs to.
You have to ensure in all cases that the context given as a parameter is still valid (channel active) if you don't want too many errors.
Perhaps the best way would be to handle your requests in order after all (as Netty does) and keep each answer's action as quick as possible.
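A minimal sketch of the producer/consumer split in Scala against the Netty 4 API; the pool size and the process function are assumptions, and writing a String reply assumes a StringEncoder in the pipeline:

import io.netty.channel.{ChannelHandlerContext, ChannelInboundHandlerAdapter}
import java.util.concurrent.Executors

object BusinessHandler {
  // shared business-logic pool, so the Netty IO threads are never blocked
  private val pool = Executors.newFixedThreadPool(4)
}

class BusinessHandler extends ChannelInboundHandlerAdapter {
  import BusinessHandler.pool

  override def channelRead(ctx: ChannelHandlerContext, msg: AnyRef): Unit = {
    // producer side: hand the message off and return to the IO loop at once
    pool.submit(new Runnable {
      override def run(): Unit = {
        val reply = process(msg)    // slow business work, off the IO thread
        if (ctx.channel().isActive) // the connection may have closed meanwhile
          ctx.writeAndFlush(reply)
      }
    })
  }

  private def process(msg: AnyRef): String = s"echo: $msg" // placeholder
}

Netty also has a built-in variant of this: pass an EventExecutorGroup when adding the handler to the pipeline, and Netty will invoke that handler on those threads instead of on the IO loop.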

Gatling synchronous HTTP request/response chain

I have implemented a chain of executions; each execution sends an HTTP request to the server and checks that the response status is 2xx. I need to implement a synchronous model in which the next execution in the chain is only triggered when the previous execution is successful, i.e. the response status is 2xx.
Below is a snapshot of the execution chain.
feed(postcodeFeeder).
  exec(Seq(LocateStock.locateStockExecution, ReserveStock.reserveStockExecution, CancelOrder.cancelStockExecution,
    ReserveStock.reserveStockExecution, ConfirmOrder.confirmStockExecution, CancelOrder.cancelStockExecution))
Since Gatling has an asynchronous IO model, what I am currently observing is that the HTTP requests are sent to the server asynchronously by a number of users, and there is no real dependency between the executions with respect to a single user.
Also, I wanted to know: for an actor/user, if an execution in the chain fails its check, does it still proceed with the next execution in the chain?
there is no real dependency between the executions with respect to a single user
No, you are wrong. Except when using "resources", requests are sequential for a given user. If you want to stop the flow for a given user when it encounters an error, you can use exitBlockOnFail.
Gatling does not consider the failure response from the previous request before firing the next one in the chain. You may need to wrap the entire block in exitBlockOnFail {} to stop Gatling from firing the next request.
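For illustration, a minimal sketch of wrapping the chain from the question in exitBlockOnFail (assuming the executions are Gatling ChainBuilders):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

// a failed check inside the block stops that virtual user's flow,
// instead of firing the remaining requests in the chain
val scn = scenario("StockFlow")
  .feed(postcodeFeeder)
  .exitBlockOnFail {
    exec(LocateStock.locateStockExecution)
      .exec(ReserveStock.reserveStockExecution)
      .exec(CancelOrder.cancelStockExecution)
      .exec(ReserveStock.reserveStockExecution)
      .exec(ConfirmOrder.confirmStockExecution)
      .exec(CancelOrder.cancelStockExecution)
  }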

RabbitMQ temporary queues for status updates in asynchronous REST

I am designing a REST API which works according to the asynchronous design detailed here. I am using RabbitMQ to enqueue the initial requests - so the client makes a call, receives a 202 Accepted response, and the job is enqueued by the server. In order that clients can get status updates ('percent done') on tasks we have a secondary queue resource, just as in the linked article.
Given that each task has its own queue resource it seems we need one temporary RabbitMQ queue per task. I am wondering whether this is a wise design choice, even though I can't really see any other options. It seems unlikely to be very efficient, and I am uneasy about the possibility of having lots of temporary queues being created like this, especially as I cannot see a way to guarantee that they will all be cleaned up (despite RabbitMQ's auto-delete facility). Prior to RabbitMQ I was using SQS for this, and have painful experience of what can happen in this respect.
I note that a similar type of queue management will already be familiar to those using RabbitMQ in RPC style. Is there an alternative, however?
First of all, each queue uses approximately 20 KB of memory, so whether having a lot of them is acceptable is up to you and your hardware. But in general, it smells. Really.
For status updates I see nothing wrong with using a key-value store, like Redis or even Memcached, and updating "percent done" there. Status checks (as well as updates) will then be very fast, simple and lightweight.
Update:
I can suggest the following architecture:
The client POSTs the task payload to some endpoint, say /tasks.
The application generates a unique task id (a UUID, aka GUID, is your friend here), publishes the task with its id to a RabbitMQ queue, and then returns the id to the client.
Workers (one or many) consume tasks from RabbitMQ and, depending on the processing step, update a Redis key for that task id with some value (step, percentage done, estimated time to result). So it may look like SET task:{id} "<some value>". When a worker completes a task, it MAY update the Redis key with the task result, or store the result somewhere else and then set the Redis key to indicate that the task is finished.
The client MAY, from time to time, GET /tasks/{id} to retrieve the task status or its result.
When the application receives GET /tasks/{id}, it returns the task status represented by the Redis key (GET task:{id}). If the key is not set (nil), the task has not yet been picked up by a worker.
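A minimal sketch of the Redis side of this flow, assuming the Jedis client (queue publishing and HTTP endpoints elided):

import java.util.UUID
import redis.clients.jedis.Jedis

object TaskStatus {
  private val redis = new Jedis("localhost")

  // application: create the task id before publishing to RabbitMQ
  def newTask(): String = UUID.randomUUID().toString

  // worker: update progress as processing advances
  def setProgress(id: String, percent: Int): Unit =
    redis.set(s"task:$id", s"$percent% done")

  // application, on GET /tasks/{id}: nil means no worker picked it up yet
  def status(id: String): String =
    Option(redis.get(s"task:$id")).getOrElse("pending")
}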
P.S.
RPC is something different from what you asked about, but I would recommend reading this question for some details.