I am using parallel collections, and when my application terminates, sbt issues:
Not interrupting system thread Thread[process reaper,10,system]
It issues this message once per core (minus one, to be precise).
I have seen in the sbt code that this is by design, but I am not sure why the threads don't terminate along with my application. Any insight would be appreciated, if you were unlucky enough to come across the same...
Parallel collections by default are backed by ForkJoinTasks.defaultForkJoinPool, which is a lazy val, so it's created the first time it's used.
Like any ForkJoinPool, it runs until explicitly shut down. The pool has no way of knowing whether it's going to receive any new tasks, and thread creation is relatively expensive, so it would be wasteful for the pool to shut down when it was empty only to start up again as soon as new tasks are added. So its threads hang around unless and until the pool is explicitly shut down.
As a design decision the JVM doesn't kill other threads just because the main thread terminates; in some programming styles the main thread terminates relatively early (e.g. think about web servers where the main thread sets up everything, starts a pool of dispatcher threads, and then exits, but the web server continues to run indefinitely).
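Concretely, the JVM exits only when every non-daemon (user) thread has terminated; daemon threads are the ones it will abandon. A minimal Java illustration (class and method names are mine):

```java
public class DaemonDemo {
    public static Thread keepAliveThread() {
        // A non-daemon (user) thread: the JVM will wait for it to finish,
        // even after main() has returned.
        return new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        });
    }

    public static Thread backgroundThread() {
        // A daemon thread: the JVM will NOT wait for it; it is killed
        // when the last user thread exits.
        Thread t = new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        });
        t.setDaemon(true);
        return t;
    }
}
```

Pool worker threads like the ones in the question are non-daemon by default, which is exactly why they keep the process alive.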
You could call ForkJoinTasks.defaultForkJoinPool.shutdown() once you know you're not going to do any more parallel operations, or you could create parallel collections using a custom pool that's explicitly controlled from your code.
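In plain java.util.concurrent terms, the explicit-shutdown pattern looks roughly like this; the sketch below uses a custom ForkJoinPool directly (how you plug such a pool into Scala parallel collections varies by Scala version):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;

public class PoolShutdownDemo {
    public static boolean runAndShutDown() throws InterruptedException {
        ForkJoinPool pool = new ForkJoinPool(4); // custom pool under our control

        // Run some parallel work inside the pool.
        int sum = pool.submit(() ->
                IntStream.rangeClosed(1, 100).parallel().sum()
        ).join();

        // Once we know no more parallel work is coming, shut the pool
        // down so its worker threads don't linger and block JVM exit.
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return pool.isShutdown() && sum == 5050;
    }
}
```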
Related
When a MATLAB parallel pool is started for the first time it typically takes a few seconds. In a user-interactive application there is hence an incentive to make sure there is a parallel pool running before the first demand for computational tasks arrives, so the process of starting a parallel pool isn't added to the total time to respond to the request.
However, every programmatic way I've seen to start a parallel pool, such as parpool, blocks execution until the pool has finished starting up. This means that even if the user has no immediate need for the pool, they cannot do anything else, like begin setting up their computationally expensive request (filling in a user interface, for instance), until the parallel pool is done starting.
This is very frustrating! If it was any other time-consuming preparatory action, once a parallel pool was in place it could be done in the background using parfeval and wouldn't obstruct the user's workflow until any request that actually called upon the completion of that preparation. But because this task actually addresses the lack of a running parallel pool, it seems users must wait for something they may not actually need to use until long after the task is complete.
Is there any way around this apparent limitation on usability?
There is currently no way to launch a parallel pool in the background. There are a couple of potential mitigations that might help:
Don't ever explicitly call parpool; just let auto-creation start the pool the first time you hit a parallel language construct such as parfor, parfeval, or spmd.
If you're using a cluster which might not be able to service your request for workers for some time, you could use batch to launch the whole computation in the background. (This is probably only appropriate if you've got a fairly long-running computation).
Should we use a thread pool for long running threads or start our own threads? Is there some design pattern?
Unfortunately, it depends. There is no hard and fast rule saying that you should always employ thread pools.
Thread pools offer two main things:
Delegated creation/reuse of threads.
Back-pressure.
IMO, it's the back-pressure property that's interesting, but often the most poorly understood. Your machine runs on a limited set of resources. If you have (say) 8 CPU cores and they are all busy working, you would like to signal in some way that adding more work (submitting more tasks) isn't going to help, at least not in terms of latency.
This is the reason java.util.concurrent.ExecutorService implementations let you specify a java.util.concurrent.BlockingQueue of your choice. When this queue fills up, submitters get pushed back: by default ThreadPoolExecutor rejects new tasks, but you can configure a handler such as CallerRunsPolicy, or block on the queue directly, so that producers slow down until the pool has caught up with the tasks in progress.
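As a hedged sketch of that back-pressure idea in plain Java (names are illustrative): a small fixed pool with a bounded queue, where overflow is handled by CallerRunsPolicy, so a saturated pool slows the producer down by running the task on the submitting thread:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BackPressureDemo {
    public static int runJobs(int jobCount) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2,                         // a fixed pool of 2 worker threads
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(4),  // bounded queue: at most 4 waiting tasks
                new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs on the submitter

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < jobCount; i++) {
            // When both workers and the queue are saturated, execute() runs
            // the task in the submitting thread, which is the back-pressure.
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }
}
```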
Whether or not to have long-running threads inside the thread pool depends on what it's doing. If the thread is constantly busy (meaning it will never complete) then it will always occupy a slot in the thread pool, which is kind of pointless.
Regarding delegated creation/reuse of threads; maybe you could have two pools, one for long-running tasks and one for other tasks. Or perhaps a long-running thread pool with one single slot, this will prevent two long-running tasks from running at the same time, provided that is what you want.
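A minimal sketch of the two-pool idea, assuming plain java.util.concurrent (the single-slot pool guarantees at most one long-running task executes at a time):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TwoPoolsDemo {
    // Single-slot pool: at most one long-running task runs at a time.
    static final ExecutorService longRunning = Executors.newSingleThreadExecutor();
    // Separate fixed pool for ordinary, short-lived tasks.
    static final ExecutorService shortTasks =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public static String demo() throws Exception {
        Future<String> slow = longRunning.submit(() -> "long-done");
        Future<String> fast = shortTasks.submit(() -> "short-done");
        String result = slow.get() + "," + fast.get();
        longRunning.shutdown();
        shortTasks.shutdown();
        return result;
    }
}
```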
As you can see, there is no single good answer. It really boils down to what you are trying to achieve and how you want to use the resources at hand.
With reference to the third point in this accepted answer, are there any cases for which it would be pointless or bad to use blocking for a long-running computation, whether CPU- or IO-bound, that is being executed 'within' a Future?
It depends on the ExecutionContext your Future is being executed in.
Pointless:
If the ExecutionContext is not a BlockContext, then using blocking will be pointless. That is, it would use the DefaultBlockContext, which simply executes the code without any special handling. It probably wouldn't add much overhead, but it's pointless nonetheless.
Bad:
Scala's ExecutionContext.Implicits.global is made to spawn new threads in a ForkJoinPool when the thread pool is about to be exhausted; that is, it spawns them when it knows exhaustion is coming, which is exactly what blocking signals. This can be bad if you're spawning lots of threads: if you queue up a lot of work in a short span of time, the global context will happily expand until gridlock. #dk14's answer explains this in more depth, but the gist is that it can be a performance killer, as managed blocking can quickly become unmanageable.
The main purpose of blocking is to avoid deadlocks within thread pools, so it is tangentially related to performance in the sense that reaching a deadlock would be worse than spawning a few more threads. However, it is definitely not a magical performance enhancer.
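Under the hood, Scala's blocking on a ForkJoinPool-backed context relies on the pool's managed-blocking facility. A hedged Java sketch of that mechanism, using ForkJoinPool.ManagedBlocker directly (the helper name blocking here is mine, not Scala's API):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ForkJoinPool;

public class ManagedBlockDemo {
    // Hypothetical helper mirroring what a `blocking {...}` wrapper does on a
    // ForkJoinPool: the pool is told this thread is about to block, so it may
    // spawn a compensating worker to keep its target parallelism.
    static <T> T blocking(Callable<T> body) throws InterruptedException {
        final Object[] result = new Object[1];
        ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker() {
            private boolean done = false;

            public boolean block() throws InterruptedException {
                try {
                    result[0] = body.call(); // the potentially blocking work
                } catch (InterruptedException e) {
                    throw e;
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
                done = true;
                return true; // no further blocking is necessary
            }

            public boolean isReleasable() {
                return done;
            }
        });
        @SuppressWarnings("unchecked")
        T t = (T) result[0];
        return t;
    }
}
```

This is exactly why heavy use of such wrappers can explode the thread count: every managed block is an invitation for the pool to add a compensating thread.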
I've written more about blocking in particular in this answer.
From my practice, blocking + ForkJoinPool may lead to continuous and uncontrollable creation of threads if you have a lot of messages to process and each one requires a long blocking call (which also means each one holds on to some memory for the duration). The ForkJoinPool creates a new thread to compensate for each "managed blocked" one, regardless of MaxThreadCount; say hello to hundreds of threads in VisualVM. And it almost kills back-pressure, as there is always room for a task in the pool's queue (if your back-pressure is based on ThreadPoolExecutor's policies). Performance ends up killed by both new-thread allocation and garbage collection.
So:
It's good when the message rate is not much higher than 1/blocking_time, as it allows you to use the full power of your threads. Some smart back-pressure might help to slow down incoming messages.
It's pointless if a task actually uses your CPU during blocking{} (no locks), as it will just increase the thread count beyond the number of real cores in the system.
And it's bad for any other case; you should use a separate fixed thread-pool (and maybe polling) instead.
P.S. blocking is hidden inside Await.result, so it's not always obvious. In our project someone just did such an Await inside an underlying worker actor.
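The "separate fixed thread-pool" advice above can be sketched in plain Java; because the pool is bounded, the thread count can never explode no matter how many blocking calls queue up (names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BlockingIoPoolDemo {
    // A dedicated, bounded pool for blocking calls: never more than 16
    // threads, unlike a compensating ForkJoinPool.
    static final ExecutorService ioPool = Executors.newFixedThreadPool(16);

    public static String fetch() throws Exception {
        Future<String> f = ioPool.submit(() -> {
            Thread.sleep(10); // stand-in for a blocking call (DB, HTTP, file...)
            return "response";
        });
        String r = f.get();
        ioPool.shutdown();
        return r;
    }
}
```

If the pool is saturated, extra requests simply wait in its queue, which restores the back-pressure that managed blocking defeats.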
In a multi-threaded process, if one thread is busy on I/O, will the entire process be blocked?
AFAIK, it depends entirely on how the programmer manages the threads inside the program.
If another thread has no pending I/O, the processor will never sit idle; it will start executing that thread. However, if the process is split into threads such that one thread waits for the result of another, then the entire process will be blocked.
Please comment if more information needs to be added.
Does any other explanation exist?
If the process has only one thread, then yes.
If the process has multiple threads, then normally no, provided the operating system supports multithreading.
This question can also be addressed in terms of the underlying implementation of user threads. There are different multithreading models; to implement user threads, they have to be mapped to kernel threads:
Many-to-One: Many user threads to one kernel thread
One-to-One: Each user thread is assigned to a kernel thread.
Many-to-Many: Many user threads are multiplexed over a number of kernel threads.
In the many-to-one case, a single blocking operation (system call) within a thread can block the whole process. This disadvantage is not present in the one-to-one model.
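On a one-to-one system such as a typical JVM, this is easy to observe: one thread sitting in a blocking call does not stop another thread from making progress. A small illustrative sketch (the sleep stands in for blocking I/O):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class BlockingDemo {
    public static boolean otherThreadProgresses() throws InterruptedException {
        CountDownLatch progressed = new CountDownLatch(1);

        // This thread "blocks" (sleep stands in for a blocking read/write).
        Thread blocked = new Thread(() -> {
            try { Thread.sleep(300); } catch (InterruptedException ignored) {}
        });
        // This thread does its work and signals that it made progress.
        Thread worker = new Thread(progressed::countDown);

        blocked.start();
        worker.start();
        // On a one-to-one model the worker finishes long before the
        // blocked thread wakes up.
        boolean ok = progressed.await(2, TimeUnit.SECONDS);
        blocked.join();
        worker.join();
        return ok;
    }
}
```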
I'm just learning, and really liking, the Actor pattern. I'm using Scala right now, but I'm interested in the architectural style in general, as it's used in Scala, Erlang, Groovy, etc.
The case I'm thinking of is where I need to do things concurrently, such as, let's say "run a job".
With threading, I would create a thread pool and a blocking queue, and have each thread poll the blocking queue, and process jobs as they came in and out of the queue.
With actors, what's the best way to handle this? Does it make sense to create a pool of actors, and somehow send them messages containing the jobs? Maybe with a "coordinator" actor?
Note: An aspect of the case which I forgot to mention was: what if I want to constrain the number of jobs my app will process concurrently? Maybe with a config setting? I was thinking that a pool might make it easy to do this.
Thanks!
A pool is a mechanism you use when the cost of creating and tearing down a resource is high. In Erlang this is not the case so you should not maintain a pool.
You should spawn processes as you need them and destroy them when you have finished with them.
Sometimes it makes sense to limit how many worker processes you have operating concurrently on a large task list, because the tasks the processes are spawned to complete involve resource allocation. At the very least processes use up memory, but they could also keep open files and/or sockets, which tend to be limited to only thousands and fail miserably and unpredictably once you run out.
To have a pull-driven task pool, one can spawn N linked processes that ask for a task, and hand each of them a function they can spawn_monitor. As soon as the monitored process has ended, they come back for the next task. Specific needs drive the details, but that is the outline of one approach.
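A JVM analogue of that pull-driven shape (a hypothetical sketch, not Erlang semantics): N workers loop, pulling tasks from a shared queue, so at most N tasks are ever in flight regardless of how many are queued:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class PullPoolDemo {
    public static int process(int nWorkers, int nTasks) throws InterruptedException {
        BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < nTasks; i++) {
            tasks.add(done::incrementAndGet); // the "work" is just a counter bump
        }
        // One poison pill per worker so each knows when to stop pulling.
        Runnable stop = () -> {};
        for (int i = 0; i < nWorkers; i++) tasks.add(stop);

        Thread[] workers = new Thread[nWorkers];
        for (int i = 0; i < nWorkers; i++) {
            workers[i] = new Thread(() -> {
                while (true) {
                    Runnable t;
                    try { t = tasks.take(); } catch (InterruptedException e) { return; }
                    if (t == stop) return; // poison pill: no more work
                    t.run();
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        return done.get();
    }
}
```

The concurrency cap is simply nWorkers, which maps directly onto the question's "config setting" for limiting parallel jobs.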
The reason I would let each task spawn a new process is that processes do have some state, and it is nice to start off with a clean slate. It's a common fine-tuning to set the min-heap size of a process to minimize the number of GCs needed during its lifetime. It is also very efficient, garbage-collection-wise, to free all memory for a process at once and start on a new one for the next task.
Does it feel weird to use twice the number of processes like that? It's a feeling you need to overcome in Erlang programming.
There is no best way for all cases. The decision depends on the number, duration, arrival, and required completion time of the jobs.
The most obvious difference between just spawning off actors and using pools is that in the former case your jobs will all finish at nearly the same time, while in the latter case completion times will be spread out over time. The average completion time will be the same, though.
The advantage of using actors is the simplicity of coding, as it requires no extra handling. The trade-off is that your actors will be competing for your CPU cores. You will not be able to have more parallel jobs than CPU cores (or hardware threads), no matter what programming paradigm you use.
As an example, imagine that you need to execute 100'000 jobs, each taking one minute, and the results are due next month. You have four cores. Would you spawn off 100'000 actors, each competing over the resources for a month, or would you just queue your jobs up and execute four at a time?
As a counterexample, imagine a web server running on the same machine. If you have five requests, would you prefer to serve four users in T time and one in 2T, or serve all five in 1.2T time?