I am going to keep it short: we have a product that uses BPM and an internal queue, with lots of EJBs (POJO implementation). We decided to add REST to the product, and we zeroed in on JAX-RS, with Swagger for documentation.
Now we have created an endpoint for an asynchronous scenario: when a REST request arrives, we start the BPMN flow asynchronously and then wait, up to an agreed timeout, for the flow to finish. In parallel, the internal queue receives a message once the BPMN flow has finished processing, and from that message we can construct the REST response.
I am looking for an enterprise pattern or some utility framework that helps me achieve this, rather than inventing it myself. I know Camel has lots of such patterns, but I am not so sure about it; I am looking for a JDK 1.6 compatible framework to simulate this synchronous behavior.
I would like something like RxJava or some observer/notifier pattern, probably with no internal JMS queues to pass messages between threads. A concurrent and thread-safe solution is what I am looking for.
If you are going to be using JAX-RS, then you should probably become familiar with the Asynchronous Server API. For a slow but synchronous operation, you would simply dispatch a task to your executor and resume the suspended request when you have a result.
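For illustration, a minimal sketch of that approach with the JAX-RS 2.0 AsyncResponse API (the path, the executor and the startBpmnFlowAndWait() helper are placeholders for whatever your product exposes; also check that your JAX-RS 2.0 implementation still runs on your JDK 1.6 stack, since several require a newer JVM):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.container.AsyncResponse;
    import javax.ws.rs.container.Suspended;
    import javax.ws.rs.container.TimeoutHandler;
    import javax.ws.rs.core.Response;

    @Path("/flows")
    public class FlowResource {

        private final ExecutorService executor = Executors.newCachedThreadPool();

        @GET
        @Path("/start")
        public void startFlow(@Suspended final AsyncResponse asyncResponse) {
            // Agreed timeout: answer with an error if the flow does not finish in time.
            asyncResponse.setTimeout(30, TimeUnit.SECONDS);
            asyncResponse.setTimeoutHandler(new TimeoutHandler() {
                public void handleTimeout(AsyncResponse response) {
                    response.resume(Response.status(Response.Status.SERVICE_UNAVAILABLE)
                            .entity("Flow did not finish in time").build());
                }
            });

            // Hand the slow work to a worker thread; the container thread is released immediately.
            executor.submit(new Runnable() {
                public void run() {
                    String result = startBpmnFlowAndWait();
                    asyncResponse.resume(result);
                }
            });
        }

        // Placeholder: start the BPMN flow and block until the internal queue
        // reports completion (or throw on failure).
        private String startBpmnFlowAndWait() {
            return "flow-result";
        }
    }

In a container you would normally use a managed executor rather than creating your own pool, but the shape of the code stays the same.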
Another approach is to store the suspended request in a shared data structure, with a worker responsible for observing the completed flows, looking up the suspended request and dispatching the response.
The ResponseServlet from Michael Barker's ticketing demonstration shows this basic idea (Barker's code uses servlets rather than JAX-RS, and Disruptor rather than RxJava, so you'll need to translate).
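For completeness, a rough sketch of that shared-structure idea in JAX-RS terms, assuming the BPMN flow carries a correlation id and that the worker consuming your internal queue calls flowCompleted() when the result message arrives (all names here are placeholders):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.TimeUnit;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.container.AsyncResponse;
    import javax.ws.rs.container.Suspended;

    @Path("/flows")
    public class SuspendedFlowRegistry {

        // Correlation id -> suspended REST request waiting for that flow.
        private static final ConcurrentMap<String, AsyncResponse> PENDING =
                new ConcurrentHashMap<String, AsyncResponse>();

        @GET
        @Path("/start")
        public void start(@Suspended final AsyncResponse asyncResponse) {
            String correlationId = startBpmnFlow();   // placeholder: kick off the flow asynchronously
            asyncResponse.setTimeout(30, TimeUnit.SECONDS);
            PENDING.put(correlationId, asyncResponse);
        }

        // Called by the worker that observes the internal queue of finished flows.
        public static void flowCompleted(String correlationId, Object result) {
            AsyncResponse waiting = PENDING.remove(correlationId);
            if (waiting != null) {
                waiting.resume(result);               // builds and sends the REST response
            }
        }

        private String startBpmnFlow() {
            return java.util.UUID.randomUUID().toString();
        }
    }

In a real implementation you would also evict timed-out entries (for example from a TimeoutHandler) so the map does not grow without bound.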
Additional resources on async response processing
https://dennis-xlc.gitbooks.io/restful-java-with-jax-rs-2-0-2rd-edition/content/en/part1/chapter13/server_asynchronous_response_processing.html
http://www.nurkiewicz.com/2014/12/asynchronous-timeouts-with.html
Related
I'm new to Service Fabric Reliable Actors technology and trying to figure out best practices for this specific scenario:
Let's say we have some legacy code that we want to integrate with new code built on SF Reliable Actors. Actors of a certain type, "ActorExecutor", are going to asynchronously call some third-party service that could sometimes get stuck for a pretty long time, longer than the actor's calling client is prepared to wait, or even experience prolonged underlying communication issues. We do not want the client (legacy code) to get blocked by any sort of issue in ActorExecutor; it does not expect to receive any value or status back from the actor. Should we use an SF ReliableQueue for that? Should we use some sort of actor broker that receives requests from the client and stores them in a queue: Client->ActorBroker->ActorExecutor? Could reminders be helpful here?
One more question in this regard: given that many thousands of actors might be stuck in an incomplete third-party call at the same time, and we want to reactivate them and repeat the very last call for them, should we write a new tool for that? In NServiceBus you can create an error queue in MSMQ where all failed ('unable to process') messages land, and then you can simply re-process them at any time in the future. From my understanding, there is no such thing in Service Fabric, and it's something we would need to build on our own.
An event-driven approach can help you here. Instead of waiting for the Actor to return from the call to the service, you enqueue a task on it, requesting it to perform some action. The service-calling Actor then functions autonomously, processing items from its task queue. This allows it to perform retries and error handling. After a successful call, a new event can notify the rest of the system.
Maybe this project can help you to get started.
edits:
At this time, I don't believe you can use reliable collections in Actors. So a queue inside the state of an Actor is a regular (read-only) collection.
Process the queue using an Actor Timer. Don't use the thread pool, as it's not persistent and won't survive crashes and Actor garbage collections.
I have a back-end Scala application that needs to integrate with RabbitMQ. The back-end Scala app executes long-running tasks asynchronously. Messages to execute the tasks are queued into RabbitMQ by a web client. The back-end application then consumes each of these messages, executing the corresponding long-running tasks.
Should the Scala app directly consume the messages from RabbitMQ and simply have the corresponding tasks processed using Futures? Or is it better to use Akka Actors to receive these messages from RabbitMQ and then execute the long-running tasks?
What are the pros and cons of each approach?
Futures sound like a simpler approach for your use case combined with the RabbitMQ Java client.
My model for choosing actors v. futures is: prefer futures, switching to actors when I feel I have a good use case for them (see Good use case for Akka for some examples). For example, if you were trying to divide-and-conquer the batch workloads (as the linked answer states), actors may serve your purposes well.
Use the RabbitMQ Java examples below as a starting point, modifying them to do the work in futures so that the thread polling the work queue is not blocked; a minimal sketch of that modification follows the links. I included links to both the work queue and RPC examples in case you need to return some response (RabbitMQ handles that case well, as it has a built-in concept of a correlationId).
Java RabbitMQ examples:
Work Queues
Remote procedure call (RPC)
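For illustration, a minimal sketch of that modification using the RabbitMQ Java client: the consumer callback only hands the message to a worker pool (standing in for the futures mentioned above) and returns immediately. The queue name and doLongRunningTask() are placeholders, and auto-ack is used for brevity; switch to manual acks if you cannot afford to lose messages when a task fails.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DefaultConsumer;
    import com.rabbitmq.client.Envelope;

    public class TaskConsumer {

        public static void main(String[] args) throws Exception {
            final ExecutorService workers = Executors.newFixedThreadPool(8);

            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            channel.queueDeclare("task_queue", true, false, false, null);

            channel.basicConsume("task_queue", true, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                                           AMQP.BasicProperties properties, byte[] body) {
                    final String message = new String(body);
                    // Hand the long-running task to the worker pool so this
                    // consumer thread can go straight back to receiving messages.
                    workers.submit(new Runnable() {
                        public void run() {
                            doLongRunningTask(message);   // placeholder for the real task
                        }
                    });
                }
            });
        }

        private static void doLongRunningTask(String message) {
            // ... the actual long-running work ...
        }
    }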
While I was reading about automated JUnit test case generation in Eclipse, I came across this sentence:
the testcases were generated to test both the synchronous and asynchronous clients.
I googled a lot to find the definition of these two terms and the difference between them, but couldn't find a satisfactory answer.
Could anyone please explain what synchronous and asynchronous clients are?
From EAI Patterns:
In a synchronous implementation of a Web Service, the client connection remains open from the time the request is submitted to the server. The client will wait until the server sends back the response message....
At the present time, most Web Services toolkits only support synchronous messaging by default. However, using existing standards and tools such as asynchronous message queuing frameworks, some vendors have emulated asynchronous messaging for Web Services.
With asynchronous clients, the client is able to handle incoming data from the server once the server has done its job. Asynchronous requests are a 'fire and forget' mechanism: the client sends the request and carries on, and the target will inform it about the progress later.
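To make the distinction concrete, here is a small illustrative sketch in plain Java, with fetch() standing in for whatever remote call the client makes. The synchronous client blocks on the call; the asynchronous client registers what should happen once the result arrives and carries on with other work.

    import java.util.concurrent.CompletableFuture;

    public class ClientStyles {

        // Stand-in for a remote call that takes a while to answer.
        static String fetch() {
            try { Thread.sleep(2000); } catch (InterruptedException e) { }
            return "response";
        }

        public static void main(String[] args) {
            // Synchronous client: the caller waits here until the response is back.
            String syncResult = fetch();
            System.out.println("sync: " + syncResult);

            // Asynchronous client: fire the request, register a callback, keep going.
            CompletableFuture<Void> done = CompletableFuture
                    .supplyAsync(ClientStyles::fetch)
                    .thenAccept(r -> System.out.println("async: " + r));

            System.out.println("async request sent, doing other work...");
            done.join();   // only so the demo does not exit before the callback runs
        }
    }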
I am trying to use Akka to implement the following (I think I'm trying to use Akka the proper way):
I have a system where I have n resource listeners. Essentially a resource listener is an entity that will listen on an input resource and publish what it sees (i.e. polling a database, tailing a log file, etc.).
So I want to use Akka actors to do these little units of work (listening on a resource). I've noticed that Akka gives me a thread pool of t threads, which may be fewer than the number of listeners. Unfortunately for me, getting a message from these resource listeners might block, so it could take seconds, or even minutes, before the next message pops up.
Is there any way to suspend a resource listener so it leaves the thread to another actor and we'll come back to it a little later in time?
Executive Summary
What you want is for your producer API (the resources) to be asynchronous, or at least support non-blocking operations (so that you can do polling). If the API does not support that, then there is no way to retrofit this property, not even using the almighty actors ;-)
Strategies for Different Situations
Only Blocking API
If the resources only support the blocking getWhatever() method of retrieving things, then you must allocate one thread per resource. An Actor with a PinnedDispatcher could be a way to do this. But be aware that the actor will not be responsive while waiting for events from the resource.
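For example, a sketch using the classic Akka Java API (the asker is using Scala, but the dispatcher configuration is identical); BlockingResource is a hypothetical wrapper around the blocking getWhatever() call:

    // In application.conf: a dispatcher that gives each actor its own dedicated thread.
    //
    //   blocking-listener-dispatcher {
    //     type = PinnedDispatcher
    //     executor = "thread-pool-executor"
    //   }

    import akka.actor.AbstractActor;

    public class ResourceListener extends AbstractActor {

        // Hypothetical blocking resource API from the answer above.
        public interface BlockingResource {
            Object getWhatever();
        }

        private final BlockingResource resource;

        public ResourceListener(BlockingResource resource) {
            this.resource = resource;
        }

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .matchEquals("listen", msg -> {
                        Object event = resource.getWhatever();   // blocks, but only the pinned thread
                        // ... publish the event to the rest of the system ...
                        getSelf().tell("listen", getSelf());     // go back to listening
                    })
                    .build();
        }
    }

    // Wiring it up:
    //   ActorRef listener = system.actorOf(
    //           Props.create(ResourceListener.class, resource)
    //                .withDispatcher("blocking-listener-dispatcher"));
    //   listener.tell("listen", ActorRef.noSender());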
Non-Blocking but Synchronous API
If there is a peek() or poll() method on the resource API, you can use one actor per resource, have them share a thread (or pool) and schedule the polling as required (e.g. every 100 ms or whatever you need). This has the huge advantage that nobody is actually blocked and the whole system remains responsive. But the latency for event reception will be on the order of your schedule interval.
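A sketch of that polling variant, again against the classic (2.5-era) Akka Java API; the 100 ms interval and the poll() call on the hypothetical resource are just examples (newer Akka versions offer timer helpers such as startTimerAtFixedRate instead):

    import java.util.concurrent.TimeUnit;
    import akka.actor.AbstractActor;
    import akka.actor.Cancellable;
    import scala.concurrent.duration.Duration;

    public class PollingListener extends AbstractActor {

        // Hypothetical non-blocking resource API.
        public interface PollableResource {
            Object poll();   // returns an event, or null if nothing is available
        }

        private final PollableResource resource;
        private Cancellable tick;

        public PollingListener(PollableResource resource) {
            this.resource = resource;
        }

        @Override
        public void preStart() {
            // Have the scheduler send this actor a "poll" message every 100 ms.
            tick = getContext().getSystem().scheduler().schedule(
                    Duration.Zero(), Duration.create(100, TimeUnit.MILLISECONDS),
                    getSelf(), "poll",
                    getContext().getSystem().dispatcher(), getSelf());
        }

        @Override
        public void postStop() {
            tick.cancel();
        }

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .matchEquals("poll", msg -> {
                        Object event = resource.poll();   // returns immediately
                        if (event != null) {
                            // ... publish the event ...
                        }
                    })
                    .build();
        }
    }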
Proper Asynchronous API
If you have enough good karma to encounter a nice asynchronous API, then simply register a callback which will send a message to the actor whenever an event occurs. Sadly, this is not the norm.
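In that lucky case the glue is tiny: the callback just turns each event into a message and does nothing else on the resource's thread. A sketch, where the onEvent registration is a hypothetical shape of such an API:

    import akka.actor.ActorRef;

    public class AsyncResourceBridge {

        // Hypothetical shape of a properly asynchronous resource API.
        public interface AsyncResource {
            void onEvent(EventCallback callback);
        }

        public interface EventCallback {
            void handle(Object event);
        }

        public static void bridge(AsyncResource resource, final ActorRef listener) {
            resource.onEvent(new EventCallback() {
                public void handle(Object event) {
                    // Do no real work on the resource's callback thread;
                    // just forward the event to the actor as a message.
                    listener.tell(event, ActorRef.noSender());
                }
            });
        }
    }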
PS:
The JVM does not support wrapping up the current call stack, doing something else and returning to that same processing state later. A method can only be popped off the stack when it is actually finished.
In general, you should try to avoid blocking operations in actors. For file IO there are asynchronous libraries, and for some databases, too. If that is not an option for you, you can change the default dispatcher so that the underlying thread pool expands as needed.
One option is to call your blocking APIs inside Futures. The Futures should use an ExecutionContext (thread pool) that is separate from the Actors' ExecutionContext.
See this blog post for an example (specifically CacheActor.findValueForSender).
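Not the blog's code, but a rough sketch of the same idea in plain Java terms (the asker and the blog use Scala Futures; CompletableFuture plays the same role here): the blocking call runs on its own dedicated pool, and the result comes back to the actor as an ordinary message.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import akka.actor.ActorRef;

    public class BlockingCallBridge {

        // Dedicated pool for blocking work, kept away from the actors' dispatcher.
        private static final ExecutorService BLOCKING_POOL = Executors.newFixedThreadPool(16);

        // Run the blocking call off the actor's thread and deliver the result as a message.
        public static void callAndReply(final ActorRef requester) {
            CompletableFuture
                    .supplyAsync(BlockingCallBridge::slowLookup, BLOCKING_POOL)
                    .thenAccept(result -> requester.tell(result, ActorRef.noSender()));
        }

        // Placeholder for the blocking API (a JDBC query, a file read, ...).
        private static String slowLookup() {
            try { Thread.sleep(1000); } catch (InterruptedException e) { }
            return "value";
        }
    }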
ASP.NET MVC 2 includes the built-in feature of asynchronous controllers. My question is: are there any benefits to using asynchronous controllers to send messages to the bus if I'm not waiting for a reply from the bus?
Microsoft states this in their async controller documentation:
In general, use asynchronous pipelines when the following conditions are true:
The operations are network-bound or I/O-bound instead of CPU-bound.
Testing shows that the blocking operations are a bottleneck in site performance and that IIS can service more requests by using asynchronous action methods for these blocking calls.
Parallelism is more important than simplicity of code.
You want to provide a mechanism that lets users cancel a long-running request.
When reading through the list and keeping in mind that we're not expecting any reply from the bus, I'm not seeing any benefit in using the async controllers over the synchronous ones. But is there one?
If you don't need the response then you don't need async controllers.