How will jBPM handle concurrent workflows in memory?

I want to use jBPM as a library in my web service and would like to understand the implementation details of how jBPM handles concurrent workflows in memory.
I am particularly interested in details with respect to scale and robustness.
Thanks in advance.

So, your requirement is to run business processes in-memory only? Each running workflow is a separate process instance, and you can have as many as your memory can handle. In jBPM there is an object called KieSession that keeps all the running process instances. If you need to scale to multiple nodes, you can have multiple KieSessions, one or more on each node.
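A minimal sketch of what this looks like in code, assuming a kmodule.xml with a default session is on the classpath and a process with the placeholder id "com.sample.hello" has been deployed:

    import org.kie.api.KieServices;
    import org.kie.api.runtime.KieContainer;
    import org.kie.api.runtime.KieSession;
    import org.kie.api.runtime.process.ProcessInstance;

    public class InMemoryProcessDemo {
        public static void main(String[] args) {
            // Build a KieContainer from the kmodule on the classpath
            KieServices ks = KieServices.Factory.get();
            KieContainer container = ks.getKieClasspathContainer();

            // A single in-memory KieSession holds all running process instances
            KieSession session = container.newKieSession();

            for (int i = 0; i < 1000; i++) {
                // "com.sample.hello" is a placeholder process id for illustration
                ProcessInstance pi = session.startProcess("com.sample.hello");
                System.out.println("Started instance " + pi.getId());
            }

            session.dispose();
        }
    }

Without a persistent runtime strategy this state lives purely in memory, so each node only sees the instances started in its own KieSession.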
Could you please elaborate on the question if you are looking for more specific answers?
Regards

Related

Asynchronous Computation in scalajs Diode

I have a user interface with a button that executes the function longComputation(x: A): A and then updates the user interface (in particular the model) with the new result. This function may take a long time to compute the result and should therefore run in parallel.
Diode provides me with Effect, PotAction, and AsyncAction. I read the documentation about Effects and PotActions/AsyncActions, but I cannot even get a simple example to work.
Can someone point me to, or provide, a simple working example?
I created a ScalaFiddle based on the SimpleCounter example. There is a LongComputation button which should run in parallel, but it does not.
In JavaScript you cannot run things in parallel without using Web Workers because the JS engine is single-threaded. Web Workers are more like separate processes than threads, as they don't share memory and you need to send messages to communicate between workers and the main thread.
I have less than 50 reputation, so I have to create a new answer instead of commenting on @ochrons' answer:
As mentioned, Web Workers communicate via message passing and share no state. This concept is somewhat similar to Akka; there is even Akka.js, which enables you to use actor systems in Scala.js and therefore in the browser.

Scheduling a Java method with persistence

I need to execute a call to a particular method daily or more often, considering that the app may go down and the machine may reboot.
I saw examples that just put a thread to sleep, but I need persistence that copes with system reboots.
I have to be sure that if I switch off my machine, task execution resumes when I reboot it.
I found schedulers such as cron4j and Quartz, but I don't understand whether this is possible and, if it is, how to do it.
With Quartz you will only need to configure it with a persistent job store implementation and that is pretty much all there is to it. I suggest that you read through the Quartz scheduler tutorial, especially the chapter that describes Quartz job stores.
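A rough sketch of what that looks like, assuming a quartz.properties file on the classpath points Quartz at a JDBC job store (the job names, data source and 03:00 schedule below are only illustrative):

    import org.quartz.Job;
    import org.quartz.JobDetail;
    import org.quartz.JobExecutionContext;
    import org.quartz.Scheduler;
    import org.quartz.SchedulerException;
    import org.quartz.Trigger;
    import org.quartz.impl.StdSchedulerFactory;

    import static org.quartz.CronScheduleBuilder.dailyAtHourAndMinute;
    import static org.quartz.JobBuilder.newJob;
    import static org.quartz.TriggerBuilder.newTrigger;

    public class DailyTaskScheduler {

        // The job simply delegates to the method that must run daily
        public static class DailyJob implements Job {
            @Override
            public void execute(JobExecutionContext context) {
                // call the business method here
            }
        }

        public static void main(String[] args) throws SchedulerException {
            // quartz.properties would contain something like:
            //   org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
            //   org.quartz.jobStore.dataSource = myDS
            // so that jobs and triggers survive an application or machine restart
            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

            JobDetail job = newJob(DailyJob.class)
                    .withIdentity("dailyJob", "maintenance")
                    .build();

            Trigger trigger = newTrigger()
                    .withIdentity("dailyTrigger", "maintenance")
                    .withSchedule(dailyAtHourAndMinute(3, 0)) // every day at 03:00
                    .build();

            // Avoid re-registering a job that is already in the persistent store
            if (!scheduler.checkExists(job.getKey())) {
                scheduler.scheduleJob(job, trigger);
            }
            scheduler.start();
        }
    }

Executions missed while the machine was down are then handled according to the trigger's misfire instruction, which you can tune per trigger.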

Reconstructing a Drools StatefulKnowledgeSession after server restart

Assume I created a StatefulKnowledgeSession from a given knowledgebase.
The JBPM process in this session can last for multiple days so we need to persist the session between invocations.
Now the knowledge resources (jBPM process definitions, i.e. BPMN files) may change while a given process instance is running.
Upon server restart, I will need to reconstruct the correct knowledgebase in order to load the session.
But how do I know which resources to use to rebuild the knowledgebase?
Does a session keep track of the resources which were used to start it?
Do I need to build and manage KnowledgeBaseConfigurations?
Any help would be greatly appreciated!
Michiel
Typically your application would recreate the kbase the same way it was created the first time. So, depending on how you create your kbase, this will simply involve loading the necessary processes again from the classpath, from the filesystem, or from a Guvnor repository, for example.
The session itself does not keep track of how the kbase was built, so it cannot recreate it for you.
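A sketch of that recreation step using the Drools 5 / jBPM 5 API, assuming the BPMN files are reloaded from the classpath and the session was persisted through JPAKnowledgeService (the resource name, persistence unit and session id are placeholders):

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import org.drools.KnowledgeBase;
    import org.drools.KnowledgeBaseFactory;
    import org.drools.builder.KnowledgeBuilder;
    import org.drools.builder.KnowledgeBuilderFactory;
    import org.drools.builder.ResourceType;
    import org.drools.io.ResourceFactory;
    import org.drools.persistence.jpa.JPAKnowledgeService;
    import org.drools.runtime.Environment;
    import org.drools.runtime.EnvironmentName;
    import org.drools.runtime.StatefulKnowledgeSession;

    public class SessionRestore {

        public static StatefulKnowledgeSession restore(int sessionId) {
            // Rebuild the kbase exactly as it was built the first time,
            // here by reloading the BPMN resource from the classpath
            KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
            kbuilder.add(ResourceFactory.newClassPathResource("MyProcess.bpmn"),
                         ResourceType.BPMN2);

            KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
            kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());

            // Point the environment at the same persistence unit that was
            // used when the session was first created
            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");
            Environment env = KnowledgeBaseFactory.newEnvironment();
            env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);

            // Reload the persisted session by its id against the rebuilt kbase
            return JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env);
        }
    }

In a JTA environment you would typically also set EnvironmentName.TRANSACTION_MANAGER in the same Environment before loading the session.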
Kris

Celery vs Ipython parallel

I have looked at the documentation for both, but I am not sure which is the best choice for my application. I have looked more closely at Celery, so the example will be given in those terms.
My use case is similar to this question, with each worker loading a large file remotely (one file per machine); however, I also need the workers to hold persistent objects. So, if a worker completes a task and returns a result, and is then called again, I need to reuse a previously created variable for the new task.
Recreating the object on every task call is far too wasteful. I have not seen a Celery example that leads me to believe this is possible; I was hoping to use the worker_init signal to accomplish it.
Finally, I need a central hub to keep track of what all the workers are doing. This seems to imply a client-server architecture rather than the one provided by Celery; is this correct? If so, would IPython Parallel be a good choice given these requirements?
I'm currently evaluating Celery vs IPython Parallel as well. Regarding a central hub to keep track of what the workers are doing, have you checked out the Celery Flower project? It provides a web page that lets you view the status of all tasks in the queue.

JBoss Service / Managed Bean Question

I have a managed bean / service running inside of JBoss. I also have a Quartz job that occasionally wakes up and calls a method on the managed bean. This method is sometimes long and drawn out, and since I don't want the Quartz job to time out, I have implemented a thread within the managed bean to perform the processing. When the thread is finished I need to update a database table with the results. This is a very serial process and it needs to be based on some business rules, etc.
My main question: I can use an EntityManager within the service without a problem, but I cannot use it from within the thread, where I get a NullPointerException. What would be the best way to address this?
Thanks,
Scott
As creating threads in appservers is discouraged, I'd modify the setup a bit.
I'd move the core of the processing to a message-driven bean and have the Quartz job just send a message to the queue the MDB is listening on. The MDB in turn can call your EJB, and this way everything stays within what the standard allows.
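A bare-bones sketch of that setup (the queue name and the ResultProcessor EJB interface are hypothetical, and the exact activation config properties depend on the JBoss version):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.EJB;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Listens on the queue that the Quartz job posts to; the long-running work
    // then happens inside a container-managed MDB instead of a hand-rolled thread
    @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Queue"),
            @ActivationConfigProperty(propertyName = "destination",
                                      propertyValue = "queue/longRunningWork")
    })
    public class LongRunningWorkBean implements MessageListener {

        // Hypothetical local EJB that owns the EntityManager and the business rules
        @EJB
        private ResultProcessor resultProcessor;

        @Override
        public void onMessage(Message message) {
            // Runs in a container-managed thread and transaction, so injected
            // resources such as the EntityManager behave as expected here
            resultProcessor.processAndStoreResults();
        }
    }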
As per the documentation and specification, the EntityManager is not thread-safe and cannot be used across different child threads as I originally had in mind. I ended up going back to the original design, similar to the one provided by fvu; however, I found an annotation that let me modify the bean timeout period and allow the long-running process to work properly. Here's the annotation that I used:
    @PoolClass(value=org.jboss.ejb3.StrictMaxPool.class, timeout=360000000L)
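For context, this is roughly where that annotation goes with the JBoss 5-era EJB3 extensions (the bean and interface names below are made up; the relevant part is the timeout on the strict max pool):

    import javax.ejb.Stateless;
    import org.jboss.annotation.ejb.PoolClass;
    import org.jboss.ejb3.StrictMaxPool;

    // Raise the strict-max-pool timeout so a long-running invocation is not
    // cut off while waiting for a bean instance from the pool
    @Stateless
    @PoolClass(value = StrictMaxPool.class, timeout = 360000000L)
    public class LongRunningServiceBean implements LongRunningService {

        @Override
        public void processAndStoreResults() {
            // serial, rule-driven processing and the database update happen here
        }
    }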