SimpleTimeLimiter shutdown - guava

I am using com.google.common.util.concurrent.SimpleTimeLimiter and wondering whether it takes care of thread-pool shutdown. I construct it with the no-args constructor, new SimpleTimeLimiter(), but that provides no way to call shutdown().

Note the JavaDoc of ExecutorService.shutdown():
Initiates an orderly shutdown in which previously submitted tasks are
executed, but no new tasks will be accepted. Invocation has no
additional effect if already shut down.
The backing ExecutorService.submit(callable) is called once inside SimpleTimeLimiter. Since no new tasks will ever be submitted, shutdown() is not needed.
But if we use the constructor SimpleTimeLimiter(ExecutorService executor), then we are responsible for calling shutdown() ourselves.
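The ownership rule above can be sketched with plain java.util.concurrent (no Guava): whoever creates the ExecutorService is responsible for shutting it down once no more tasks will be submitted. The helper below (callWithTimeout is an illustrative name, not Guava's API) mirrors what SimpleTimeLimiter does with its backing pool:

```java
import java.util.concurrent.*;

public class TimeLimitDemo {
    // Runs the callable with a time limit on the supplied executor,
    // roughly what SimpleTimeLimiter does with its backing pool.
    static <T> T callWithTimeout(ExecutorService executor,
                                 Callable<T> task,
                                 long timeout, TimeUnit unit) throws Exception {
        Future<T> future = executor.submit(task);
        try {
            return future.get(timeout, unit);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the straggler before propagating
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newCachedThreadPool();
        try {
            String result = callWithTimeout(executor, () -> "done", 1, TimeUnit.SECONDS);
            System.out.println(result);
        } finally {
            executor.shutdown(); // we created the pool, so we shut it down
        }
    }
}
```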

Related

Unbind at System.exit(0)

I'm working with Java RMI. The problem is that when a thread closes, or System.exit(0) is called, I need the object registered with the RMI registry to execute an unbind() to remove all associations with the object. When we perform System.exit(0), the object is already registered with the RMI registry.
How can I make the unbind() of that particular object happen when System.exit(0) is called? I had thought about overriding System.exit(), but apparently that's not the solution.
The problem is that when a thread closes, or System.exit(0) is called, I need the object registered with the RMI registry to execute an unbind() to remove all associations with the object.
So do that. But there is no such thing as 'closing a thread', and even exiting a thread doesn't require you to unbind anything.
When we perform System.exit(0), the object is already registered with the RMI registry.
Good, so the unbind() will succeed. Not sure what point is being made here. Did you mean 'still registered'?
How can I make the unbind() of that particular object happen when System.exit(0) is called?
You can't. You have to precede the System.exit() call with an unbind() call.
I had thought about overriding System.exit(), but apparently that's not the solution.
You can't override static methods, and System is final.
It seems you may have System.exit() spattered all over the place, which is already poor practice.
The simple answer is not to call System.exit() at all, but to unbind and unexport the object instead. Then the RMI threads will exit and your JVM will exit of its own accord, as long as you don't have any non-daemon threads of your own.
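A minimal sketch of that advice, with illustrative names (Echo, demo) and an in-process registry on an anonymous port: unbind the name, then unexport the object, and let the JVM exit on its own instead of calling System.exit().

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class CleanShutdown {
    public interface Echo extends Remote {
        String echo(String s) throws RemoteException;
    }

    static String demo() throws Exception {
        Echo service = s -> s;                                // the remote object
        Echo stub = (Echo) UnicastRemoteObject.exportObject(service, 0);
        Registry registry = LocateRegistry.createRegistry(0); // in-process registry, anonymous port
        registry.bind("Echo", stub);

        String reply = stub.echo("ok");                       // a remote call through the stub

        registry.unbind("Echo");                              // remove the name binding
        UnicastRemoteObject.unexportObject(service, true);    // stop serving the object
        UnicastRemoteObject.unexportObject(registry, true);   // and the registry itself
        // with nothing exported, the RMI threads stop and the JVM can exit normally
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```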

Execution context without daemon threads for futures

I am having trouble with the JVM immediately exiting using various new applications I wrote which spawn threads through the Scala 2.10 Futures + Promises framework.
It seems that at least with the default execution context, even if I'm using blocking, e.g.
future { blocking { /* work */ }}
no non-daemon thread is launched, and therefore the JVM thinks it can immediately quit.
A stupid workaround is to launch a dummy Thread instance that just waits, but then I also need to make sure that this thread stops when the processes are done.
So how do I force them to run on non-daemon threads?
Looking at the default ExecutionContext attached to ExecutionContext.global, it's of the fork-join variety, and the ThreadFactory it uses sets the threads to daemon. If you want to work around this, you could use a different ExecutionContext, one you set up yourself. If you still want the FJP variety (and you probably do, as it scales the best), you should be able to look at what they are doing in ExecutionContextImpl via this link and create something similar. Or just use a cached thread pool via Executors.newCachedThreadPool, as that won't shut down immediately before your futures complete.
spawn processes
If this means processes and not just tasks, then scala.sys.process spawns non-daemon threads to run OS processes.
Otherwise, if you're creating a bunch of tasks, this is what Future.sequence helps with. Then just Await ready (Future sequence List(futures)) on the main thread.
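The fix described above can be sketched with plain java.util.concurrent (Scala's ExecutionContext ultimately wraps a Java executor): a pool whose ThreadFactory marks its threads non-daemon, so the JVM stays alive until the pool is explicitly shut down. Names here (NonDaemonPool, newNonDaemonPool) are illustrative.

```java
import java.util.concurrent.*;

public class NonDaemonPool {
    static ExecutorService newNonDaemonPool(int size) {
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable);
            t.setDaemon(false); // non-daemon: keeps the JVM alive while the pool runs
            return t;
        };
        return Executors.newFixedThreadPool(size, factory);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = newNonDaemonPool(2);
        Future<Integer> f = pool.submit(() -> 21 * 2);
        System.out.println(f.get());
        pool.shutdown(); // without this, the non-daemon threads keep the JVM running
    }
}
```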

In Scala, does Futures.awaitAll terminate the thread on timeout?

So I'm writing a mini timeout library in scala, it looks very similar to the code here: How do I get hold of exceptions thrown in a Scala Future?
The function I execute is either going to complete successfully, or block forever, so I need to make sure that on a timeout the executing thread is cancelled.
Thus my question is: On a timeout, does awaitAll terminate the underlying actor, or just let it keep running forever?
One alternative that I'm considering is to use the java Future library to do this as there is an explicit cancel() method one can call.
[Disclaimer - I'm new to Scala actors myself]
As I read it, scala.actors.Futures.awaitAll waits until all the futures in the list are resolved OR until the timeout. It will not Future.cancel, Thread.interrupt, or otherwise attempt to terminate a Future; you get to come back later and wait some more.
Future.cancel may be suitable, but be aware that your code may need to participate in effecting the cancel operation - it doesn't necessarily come for free. Future.cancel cancels a task that is scheduled but not yet started; for a running task it interrupts the thread (setting a flag that can be checked), which the task may or may not acknowledge. Review Thread.interrupt and Thread.isInterrupted(). Your long-running task would normally check whether it is being interrupted (your code) and self-terminate. Various methods (e.g. Thread.sleep, Object.wait and others) respond to the interrupt by throwing InterruptedException. You need to review and understand that mechanism to ensure your code meets your needs within those constraints. See this.
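The cooperative-cancellation point above can be shown with the Java Future API mentioned in the question: cancel(true) interrupts the worker thread, and the task must notice the interrupt (via InterruptedException or Thread.isInterrupted()) and stop itself. Names (CancelDemo, cancelBlockingTask) are illustrative.

```java
import java.util.concurrent.*;

public class CancelDemo {
    static boolean cancelBlockingTask() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CountDownLatch started = new CountDownLatch(1);
        Future<?> future = pool.submit(() -> {
            started.countDown();
            try {
                Thread.sleep(60_000);        // blocking call: responds to interrupt
            } catch (InterruptedException e) {
                // acknowledge the interrupt and self-terminate
            }
        });
        started.await();                     // ensure the task is running, not just queued
        boolean cancelled = future.cancel(true); // true = interrupt the running thread
        pool.shutdown();
        return cancelled;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(cancelBlockingTask());
    }
}
```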

How can I wait for a process tree to finish?

I have some updating to do when my application comes back from the background, but some updates depend on a certain function that, although it executes first, finishes after the other update methods (it calls a bunch of chained functions).
How can I ensure that a function tree is finished so that I may then execute the rest of the code?
Have you looked at NSOperationQueue? It enables you to specify dependencies among NSOperations so that you can rely on certain execution orders to be followed.
This might work, with the waitUntilDone flag set to YES. Give it a shot.
- (void)performSelector:(SEL)aSelector
               onThread:(NSThread *)thr
             withObject:(id)arg
          waitUntilDone:(BOOL)wait;
The Apple doc says that setting waitUntilDone to YES will block the current thread until your selector has finished executing.
wait - A Boolean that specifies whether the current thread blocks until
after the specified selector is performed on the receiver on the
specified thread. Specify YES to block this thread; otherwise, specify
NO to have this method return immediately. If the current thread and
target thread are the same, and you specify YES for this parameter,
the selector is performed immediately on the current thread. If you
specify NO, this method queues the message on the thread’s run loop
and returns, just like it does for other threads. The current thread
must then dequeue and process the message when it has an opportunity
to do so.
Let me know if it worked.
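The dependency idea above (run the follow-up work only after the whole chain has finished) can be sketched in Java, the language of the other examples in this digest, using CompletableFuture; on Apple platforms NSOperationQueue plays this role. Names (DependentUpdates, runUpdates) are illustrative.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class DependentUpdates {
    static List<String> runUpdates() {
        List<String> order = Collections.synchronizedList(new ArrayList<>());
        CompletableFuture<Void> chain = CompletableFuture
            .runAsync(() -> order.add("root update"))     // the "function tree"
            .thenRun(() -> order.add("chained update"));  // dependent step, runs only after the root
        chain.join();                                     // block until the chain is done
        order.add("remaining code");                      // now safe to run the rest
        return order;
    }

    public static void main(String[] args) {
        System.out.println(runUpdates());
    }
}
```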

Suspending the workflow instance in the Fault Handler

I want to implement a solution in my workflows that will do the following :
At the workflow level, I want to implement a Fault Handler that will suspend the workflow on any exception.
Then, at some point, the instance will receive a Resume() command.
What I want is that when the Resume() command is received, the instance re-executes the activity that failed earlier (and caused the exception) and then continues with whatever it has to do.
What is my problem :
When suspended and then resumed inside the Fault Handler, the instance just completes. The resume of course doesn't make the instance return to the execution, since in the Fault Handler, after the Suspend activity, I have nothing. So obviously the execution of the workflow ends there.
I DO want to implement the Fault Handler at the workflow level and not use a While+Sequence activity to wrap each activity in the workflow (as described here: Error Handling In Workflows), since with my pretty heavy workflows this would look like hell.
It should be some kind of generic handling...
Do you have any ideas?
Thanks.
If you're working on State Machine Workflows, my technique for dealing with errors that require human intervention to fix is creating an additional 'stateactivity' node that indicates an 'error' state, something like STATE_FAULTED. Then every state has a faulthandler that catches any exception, logs the exception and changes state to STATE_FAULTED, passing information like current activity, type of exception raised and any other context info you might need.
In the STATE_FAULTED initialization you can listen for an external command (your Resume() command, or whatever suits your needs), and when everything is OK you can just switch to the previous state and resume execution.
I am afraid that isn't going to work. Error handling in a workflow is similar to a Try/Catch block, and the only way to retry is to wrap everything in a loop and just execute the loop again if something went wrong.
Depending on the sort of error you are trying to deal with you might be able to achieve your goal by creating custom activities that wrap their own execution logic in a Try/Catch and contain the required retry logic.
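The wrap-in-Try/Catch-and-loop idea from the answer above can be sketched generically; this is plain Java rather than a WF activity, and the names (RetryWrapper, runWithRetry, "activity") are illustrative.

```java
import java.util.concurrent.Callable;

public class RetryWrapper {
    // Wraps the "activity" in its own try/catch and loops until it
    // succeeds or the attempt budget runs out.
    static <T> T runWithRetry(Callable<T> activity, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return activity.call();   // the wrapped activity
            } catch (Exception e) {
                last = e;                 // record (or log) and retry
            }
        }
        throw last;                       // budget exhausted: rethrow the last failure
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        String result = runWithRetry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("transient");
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```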