Suspending the workflow instance in the Fault Handler - workflow

I want to implement a solution in my workflows that does the following:
At the workflow level, I want to implement a Fault Handler that suspends the workflow on any exception.
At some later point the instance will receive a Resume() command.
What I want is that, when the Resume() command is received, the instance re-executes the activity that failed earlier (and caused the exception) and then continues with whatever it still has to do.
My problem:
When the instance is suspended and then resumed inside the Fault Handler, it simply completes. The resume of course doesn't return execution to where it left off, since inside the Fault Handler there is nothing after the Suspend activity, so the workflow's execution obviously ends there.
I DO want to implement the Fault Handler at the workflow level rather than wrap every activity in the workflow in a While+Sequence activity (as described here: Error Handling In Workflows), since with my pretty heavy workflows that would look like hell. It should be a fairly generic handling.
Do you have any ideas?
Thanks.

If you're working on State Machine Workflows, my technique for dealing with errors that require human intervention is to create an additional StateActivity node that represents an 'error' state, something like STATE_FAULTED. Every state then has a fault handler that catches any exception, logs it, and transitions to STATE_FAULTED, passing along information such as the current activity, the type of exception raised, and any other context you might need.
In the STATE_FAULTED initialization you can listen for an external command (your Resume() command, or whatever suits your needs), and once everything is OK you can simply switch back to the previous state and resume execution.

I am afraid that isn't going to work. Error handling in a workflow is similar to a Try/Catch block, and the only way to retry is to wrap everything in a loop and execute the loop again if something went wrong.
Depending on the sort of error you are trying to deal with, you might be able to achieve your goal by creating custom activities that wrap their own execution logic in a Try/Catch and contain the required retry logic.

How to ensure the order of user callback function in OpenCL?

I am working on an OpenCL implementation in which a particular host-side function has to be called every time a clEnqueueReadBuffer finishes executing.
I am calling the kernels in a loop. In an in-order queue it looks like this:
clEnqueueNDRangeKernel() -> clEnqueueReadBuffer(&Event) ->
clEnqueueNDRangeKernel() -> clEnqueueReadBuffer(&Event) .......
I have used clSetEventCallback() to register a callback on each read command's event. I have observed that, even though the command queue is an in-order queue, the callbacks do not execute in order.
Also, the OpenCL 1.2 specification mentions the following:
The order in which the registered user callback functions are called
is undefined. There is no guarantee that the callback functions
registered for various execution status values for an event will be
called in the exact order that the execution status of a command
changes.
Can anyone give me a solution? I want the callback functions to execute in order.
A simple solution could be to subscribe the same callback function to both events. In the callback code, you can check the status of each relevant event and perform the operation you want accordingly.
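For illustration, here is a minimal C sketch of that idea. The indices and the printf are placeholders for whatever per-read processing you need, and event1/event2 are assumed to be the events from your two enqueued reads:
#include <CL/cl.h>
#include <stdio.h>

/* One callback registered for both read events; user_data tells it
   which read it fired for. */
void CL_CALLBACK on_read_complete(cl_event e, cl_int status, void *user_data)
{
    int read_index = *(int *)user_data;
    if (status == CL_COMPLETE) {
        printf("read %d finished\n", read_index);
        /* process the buffer belonging to read_index here */
    }
}

void register_read_callbacks(cl_event event1, cl_event event2)
{
    static int idx0 = 0, idx1 = 1;  /* must outlive the callbacks */
    clSetEventCallback(event1, CL_COMPLETE, on_read_complete, &idx0);
    clSetEventCallback(event2, CL_COMPLETE, on_read_complete, &idx1);
}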
Note that on some implementations, the driver will batch multiple commands for execution.
The immediate effect is that multiple events will be signaled "at once" even though the associated commands complete at different times.
// event1 & event2 are likely to be signaled at once:
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event1);
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event2);
Whereas:
// event1 is likely to be signaled before event2:
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event1);
clFlush(queue);
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event2);
clFlush(queue);
I would also check which exact thread the callbacks are invoked on.
Is it the same thread each time, or a different one? If the implementation opens a new thread for this task, it might be wiser to open a single thread yourself and wait for the events in the order that you wish, as in the sketch below.
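A minimal sketch of that approach, again assuming event1/event2 are the events from the two reads; handle_result() stands in for whatever per-read handler you have:
#include <CL/cl.h>

void handle_result(int read_index);  /* hypothetical per-read handler */

/* A single worker enforces ordering by blocking on each event itself,
   in submission order, instead of relying on callback order. */
void wait_in_order(cl_event event1, cl_event event2)
{
    clWaitForEvents(1, &event1);  /* returns once the first read is complete */
    handle_result(0);
    clWaitForEvents(1, &event2);
    handle_result(1);
}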

system calls and the internal flow

I am trying to understand the exact sequence of events in the flow of a system call on Linux:
A user application program makes a call to a system call.
This results in execution of code that triggers a software exception (which code is this? It's understandable if I called a glibc API, but what if my program invokes the system call directly? A minimal example follows this list). Is there some piece of code that all system calls share which triggers this exception?
Once the exception is triggered, the user application process's context needs to be preserved, and the user application, I presume, will be pending (assuming a blocking system call was invoked).
The exception handler then runs and picks a suitable handler based on the system call invoked (how is the system call number passed to the exception handler by the exception-triggering code?).
The handler runs (say a read call, for example, which has to return the read data to the user process that triggered the invocation; this, I assume, means the exception-triggering code has also passed a pointer to the buffer into which the read data needs to be copied).
Once the exception handler is done, control has to return to the application, which would now be pending on the system call.
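For concreteness, here is a minimal example of the kind of direct invocation I mean, using the syscall(2) helper: it loads the call number into the register the ABI expects (e.g. rax on x86-64) and executes the trapping instruction (syscall, or int 0x80 on 32-bit x86), which is the code that actually triggers the kernel entry:
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello\n";
    /* equivalent to write(1, msg, 6) but bypassing the glibc wrapper */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}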
Does this flow look alright? Is there some sample code for each of these steps beyond the user application?

protractor: what is the relationship between the control flow and javascript event loop?

I'm having a difficult time trying to understand how the control flow in Protractor works in relation to how the JS event loop works. Here is what I know so far:
The Protractor control flow stores commands that return promises in a queue. The first command is at the front of the queue and the last command at the back. No command is executed until the command in front of it has had its promise resolved.
The JS event loop stores asynchronous tasks (callbacks, to be specific). Callbacks are not executed until all functions on the stack have completed and the stack is empty; before running each callback, there is a check on whether the stack is empty.
So let's take this code as an example. It clicks a search button, which makes an API request; after the data is returned, it checks whether the field that stores the returned data exists.
elem('#searchButton').click(); // triggers an API call to retrieve data
browser.wait(ExpectedConditions.presenceOf(elem('#resultDataField')), 3000);
expect(elem('#resultDataField').isPresent()).toBeTruthy();
With this code I'm able to get it to work, but I don't know how it does it. How is the event loop applied in this scenario?
The core of the ControlFlow implementation is in runEventLoop_ (in Selenium's promise.js implementation).
As I understand it, the ControlFlow registers a call to runEventLoop_ with the JS event loop (e.g., with a 0-second timeout or some such). The call to runEventLoop_ can be thought of as a single iteration of a normal event loop. It registers code to actually run a scheduled task (i.e., actually do the work you queued up in your it() block). Once that task completes or fails (e.g., by hooking its async promise callbacks), the next iteration of runEventLoop_ is scheduled (see the calls to scheduleEventLoop in runEventLoop_).
There is some complexity when a callback ends up registering new promises: those need to be "inserted" before the old next event, which is accomplished by creating a "nested" control flow. Mostly you should never have to know this.

jBPM signal event always completes the work item

I've implemented a custom work item handler that I want to be completed only by an external REST call. Therefore the item's executeWorkItem() method does NOT call manager.completeWorkItem(workItem.getId(), results); at the end, which is perfectly OK. I've also assigned a signal event to this work item in my process, which is also triggered by an external REST call. Both things work as expected, but what I do not understand is that every time I signal the work item, the work item is also automatically completed, which leads to the process continuing along its regular path AND the signaled one. But the point of the signal is to interrupt the process so that it follows ONLY the signaled path.
The process image can be found here: http://cl.ly/image/0F3L3E2w2l0j. In this example I signaled "Fail Transfer", but the rest gets executed as well, even though nothing completed the work item.
I'm using jBPM 6.1 Final.
Thanks in advance for any help.
Never mind, I found the reason for this behavior. The custom work item handler implemented:
public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
    manager.abortWorkItem(workItem.getId());
}
After removing the manager.abortWorkItem(workItem.getId()); call, the process behaves as expected.

In Scala, does Futures.awaitAll terminate the thread on timeout?

So I'm writing a mini timeout library in Scala; it looks very similar to the code here: How do I get hold of exceptions thrown in a Scala Future?
The function I execute is either going to complete successfully or block forever, so I need to make sure that on a timeout the executing thread is cancelled.
Thus my question is: on a timeout, does awaitAll terminate the underlying actor, or just let it keep running forever?
One alternative I'm considering is to use the Java Future API instead, as it has an explicit cancel() method one can call.
[Disclaimer - I'm new to Scala actors myself]
As I read it, scala.actors.Futures.awaitAll waits until all of the futures in the list are resolved OR until the timeout. It will not Future.cancel, Thread.interrupt, or otherwise attempt to terminate a Future; you get to come back later and wait some more.
Future.cancel may be suitable; however, be aware that your code may need to participate in effecting the cancel operation, and it doesn't necessarily come for free. Future.cancel cancels a task that is scheduled but not yet started. If the task is already running, it interrupts the thread (setting a flag that can be checked), which the thread may or may not acknowledge. Review Thread.interrupt and Thread.isInterrupted(). Your long-running task would normally check whether it is being interrupted (in your code) and self-terminate. Various methods (e.g., Thread.sleep, Object.wait and others) respond to the interrupt by throwing InterruptedException. You need to review and understand that mechanism to ensure your code will meet your needs within those constraints. See this.