Is there any way to interrupt a parallel collection computation in Scala?
Example:
val r = new Runnable {
  override def run(): Unit = {
    (1 to 3).par.foreach { _ => Thread.sleep(5000000) }
  }
}
val t = new Thread(r)
t.start()
Thread.sleep(300) // let them spin up
t.interrupt()
I'd expect t.interrupt to interrupt all the threads spawned by par, but this is not happening; it keeps spinning inside ForkJoinTask.externalAwaitDone. It looks like that method clears the interrupted status and keeps waiting for the spawned tasks to finish.
This is Scala 2.12.
The thread you t.start() is responsible only for kicking off the parallel computation and then waiting for and gathering the results.
It is not connected to the threads that do the actual computing. Those usually run on the default ForkJoinPool, which is independent from the thread that submits the computation tasks.
If you want to interrupt the computation, you can use a custom execution back-end (such as a manually created ForkJoinPool or a thread pool) and then shut it down. You can read about that here.
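For example, here's a minimal sketch of that approach, assuming Scala 2.12 (where ForkJoinTaskSupport accepts a java.util.concurrent.ForkJoinPool); shutting the pool down interrupts its workers, though how promptly the joining thread unblocks may vary:

import java.util.concurrent.ForkJoinPool
import scala.collection.parallel.ForkJoinTaskSupport

val pool = new ForkJoinPool(4)
val xs = (1 to 3).par
xs.tasksupport = new ForkJoinTaskSupport(pool) // run on our own pool instead of the default one

val t = new Thread(() => xs.foreach { _ => Thread.sleep(5000000) })
t.start()
Thread.sleep(300)
pool.shutdownNow() // interrupts the pool's worker threads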
Or you can have the computation itself check a cancellation flag (or invoke a callback) so that it stops cooperatively.
But none of those approaches is a great fit for this case.
If you are building a production solution, or your case is complex and critical for the app, you should probably use something that supports cancellation by design, like Monix Task or a cancellable future.
Or at least use Future and cancel it with workarounds.
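One such workaround, as a hedged sketch (interruptibly is a made-up helper, not a standard API): run the body on a dedicated thread you keep a handle to, and interrupt that thread to cancel.

import scala.concurrent.{Future, Promise}
import scala.util.Try

// Returns the future plus a cancel function that interrupts the worker thread.
def interruptibly[A](body: => A): (Future[A], () => Unit) = {
  val p = Promise[A]()
  val t = new Thread(() => { p.tryComplete(Try(body)); () })
  t.start()
  (p.future, () => t.interrupt())
}

val (f, cancel) = interruptibly { Thread.sleep(5000000) }
cancel() // the sleep throws InterruptedException, failing the future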
From this tutorial https://github.com/slouc/concurrency-in-scala-with-ce#threading
async operations are divided into 3 groups and require significantly different thread pools to run on:
Non-blocking asynchronous operations:
Bounded pool with a very low number of threads (maybe even just one), with a very high priority. These threads will basically just sit idle most of the time and keep polling whether there is a new async IO notification. Time that these threads spend processing a request directly maps into application latency, so it's very important that no other work gets done in this pool apart from receiving notifications and forwarding them to the rest of the application.
Blocking asynchronous operations:
Unbounded cached pool. Unbounded because a blocking operation can (and will) block a thread for some time, and we want to be able to serve other I/O requests in the meantime. Cached because we could run out of memory by creating too many threads, so it's important to reuse existing threads.
CPU-heavy operations:
Fixed pool in which the number of threads equals the number of CPU cores. This is pretty straightforward. Back in the day the "golden rule" was number of threads = number of CPU cores + 1, but the "+1" came from the fact that one extra thread was always reserved for I/O (as explained above, we now have separate pools for that).
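As an illustration only (the val names are made up), the three pools from this list could be declared with plain java.util.concurrent executors:

import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// 1. Non-blocking async notifications: tiny, high-priority pool
val asyncNotificationEC = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1))

// 2. Blocking operations: unbounded but cached, so threads get reused
val blockingEC = ExecutionContext.fromExecutor(Executors.newCachedThreadPool())

// 3. CPU-heavy work: one thread per core
val cpuEC = ExecutionContext.fromExecutor(
  Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors))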
In my Cats Effect application, I use the Scala Future-based ReactiveMongo library to access MongoDB. It does NOT block threads when talking to MongoDB, i.e. it performs non-blocking IO.
It needs an execution context.
Cats Effect provides a default execution context, IOApp.executionContext.
My question is: which execution context should I use for non-blocking io?
IOApp.executionContext?
But, from the IOApp.executionContext documentation:
Provides a default ExecutionContext for the app.
The default on top of the JVM is lazily constructed as a fixed thread pool based on the number of available CPUs (see PoolUtils).
It seems like this execution context falls into the 3rd group I listed above, CPU-heavy operations (a fixed pool in which the number of threads equals the number of CPU cores),
and it makes me think that IOApp.executionContext is not a good candidate for non-blocking IO.
Am I right? Should I create a separate context with a fixed thread pool (1 or 2 threads) for non-blocking IO, so that it falls into the first group I listed above (non-blocking asynchronous operations: a bounded pool with a very low number of threads, maybe even just one, with a very high priority)?
Or is IOApp.executionContext designed for both CPU-bound and Non-Blocking IO operations?
The function I use to convert a Scala Future to F, which expects an execution context:
def scalaFutureToF[F[_]: Async, A](
    future: => Future[A]
)(implicit ec: ExecutionContext): F[A] =
  Async[F].async { cb =>
    future.onComplete {
      case Success(value)     => cb(Right(value))
      case Failure(exception) => cb(Left(exception))
    }
  }
In Cats Effect 3, each IOApp has a Runtime:
final class IORuntime private[effect] (
  val compute: ExecutionContext,
  private[effect] val blocking: ExecutionContext,
  val scheduler: Scheduler,
  val shutdown: () => Unit,
  val config: IORuntimeConfig,
  private[effect] val fiberErrorCbs: FiberErrorHashtable = new FiberErrorHashtable(16)
)
You will almost always want to keep the default values and not fiddle around declaring your own runtime, except perhaps in tests or educational examples.
Inside your IOApp you can then access the compute pool via:
runtime.compute
If you want to execute a blocking operation, then you can use the blocking construct:
IO.blocking(println("foo!")) >> IO.unit
This way, you're telling the CE3 runtime that this operation could be blocking and hence should be dispatched to a dedicated pool. See here.
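For instance, here's a minimal CE3 sketch of the difference (the exact pool/thread names in the output will vary):

import cats.effect.{IO, IOApp}

object BlockingDemo extends IOApp.Simple {
  def run: IO[Unit] =
    IO.blocking(println(s"blocking pool: ${Thread.currentThread.getName}")) >>
      IO(println(s"compute pool: ${Thread.currentThread.getName}"))
}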
What about CE2? Well, it had similar mechanisms but they were very clunky and also contained quite a few surprises. Blocking calls, for example, were scheduled using Blocker which then had to be somehow summoned out of thin air or threaded through the whole app, and thread pool definitions were done using the awkward ContextShift. If you have any choice in the matter, I highly recommend investing some effort into migrating to CE3.
Fine, but what about Reactive Mongo?
ReactiveMongo uses Netty (which is based on Java NIO API). And Netty has its own thread pool. This is changed in Netty 5 (see here), but ReactiveMongo seems to still be on Netty 4 (see here).
However, the ExecutionContext you're asking about is the thread pool that will perform the callback. This can be your compute pool.
Let's see some code. First, your translation method. I just changed async to async_ because I'm using CE3, and I added a println of the thread name:
def scalaFutureToF[F[_]: Async, A](future: => Future[A])(implicit ec: ExecutionContext): F[A] =
  Async[F].async_ { cb =>
    future.onComplete {
      case Success(value) =>
        println(s"Inside Callback: [${Thread.currentThread.getName}]")
        cb(Right(value))
      case Failure(exception) => cb(Left(exception))
    }
  }
Now let's pretend we have two execution contexts - one from our IOApp and another one that's going to represent whatever ReactiveMongo uses to run the Future. This is the made-up ReactiveMongo one:
val reactiveMongoContext: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1))
and the other one is simply runtime.compute.
Now let's define the Future like this:
def myFuture: Future[Unit] = Future {
  println(s"Inside Future: [${Thread.currentThread.getName}]")
}(reactiveMongoContext)
Note how we are pretending that this Future runs inside ReactiveMongo by passing the reactiveMongoContext to it.
Finally, let's run the app:
override def run: IO[Unit] = {
  val myContext: ExecutionContext = runtime.compute
  scalaFutureToF(myFuture)(implicitly[Async[IO]], myContext)
}
Here's the output:
Inside Future: [pool-1-thread-1]
Inside Callback: [io-compute-6]
The execution context we provided to scalaFutureToF merely ran the callback. Future itself ran on our separate thread pool that represents ReactiveMongo's pool. In reality, you will have no control over this pool, as it's coming from within ReactiveMongo.
Extra info
By the way, if you're not working with the type class hierarchy (F), but with IO values directly, then you could use this simplified method:
def scalaFutureToIo[A](future: => Future[A]): IO[A] =
  IO.fromFuture(IO(future))
See how this one doesn't even require you to pass an ExecutionContext - it simply uses your compute pool. Or more specifically, it uses whatever is defined as def executionContext: F[ExecutionContext] for the Async[IO], which turns out to be the compute pool. Let's check:
override def run: IO[Unit] = {
  IO.executionContext.map(ec => println(ec == runtime.compute))
}
// prints true
Last, but not least:
If we really had a way of specifying which thread pool ReactiveMongo's underlying Netty should be using, then yes, in that case we should definitely use a separate pool. We should never be providing our runtime.compute pool to other runtimes.
For a few days I have been wrapping my head around cats-effect and IO, and I feel I have some misconceptions about this effect, or have simply missed its point.
First of all: if IO can replace Scala's Future, how can we create an async IO task? Using IO.shift? Using IO.async? Is IO.delay sync or async? Can we make a generic async task with code like Async[F].delay(...)? Or does async only happen when we run IO with unsafeToAsync or unsafeToFuture?
What's the point of Async and Concurrent in cats-effect? Why are they separated?
Is IO a green thread? If yes, why is there a Fiber object in cats-effect? As I understand it, the Fiber is the green thread, but the docs claim we can think of IOs as green threads.
I would appreciate some clarification on any of this, as I have failed to comprehend the cats-effect docs on these points and the internet was not that helpful...
if IO can replace Scala's Future, how can we create an async IO task
First, we need to clarify what is meant by an async task. Usually async means "does not block the OS thread", but since you're mentioning Future, it's a bit blurry. Say, if I wrote:
Future { (1 to 1000000).foreach(println) }
it would not be async, as it's a blocking loop and blocking output, but it would potentially execute on a different OS thread, as managed by an implicit ExecutionContext. The equivalent cats-effect code would be:
for {
  _ <- IO.shift
  _ <- IO.delay { (1 to 1000000).foreach(println) }
} yield ()
(this is not the shortest possible version)
So,
IO.shift is used to maybe change the thread / thread pool. Future does it on every operation, but it's not free performance-wise.
IO.delay { ... } (a.k.a. IO { ... }) does NOT make anything async and does NOT switch threads. It's used to create simple IO values from synchronous side-effecting APIs.
Now, let's get back to true async. The thing to understand here is this:
Every async computation can be represented as a function taking a callback.
Whether you're using an API that returns Future, or Java's CompletableFuture, or something like a NIO CompletionHandler, it can all be converted to callbacks. This is what IO.async is for: you can convert any function taking a callback into an IO. And in a case like:
for {
  _ <- IO.async { ... }
  _ <- IO(println("Done"))
} yield ()
Done will only be printed when (and if) the computation in ... calls back. You can think of it as blocking the green thread, but not the OS thread.
So,
IO.async is for converting any already asynchronous computation to IO.
IO.delay is for converting any completely synchronous computation to IO.
The code with truly asynchronous computations behaves like it's blocking a green thread.
The closest analogy when working with Futures is creating a scala.concurrent.Promise and returning p.future.
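As an illustration, here's a hedged sketch (using the CE2-style IO.async signature) of wrapping Java's CompletableFuture, which is exactly such a callback-taking API; it ignores cancellation for simplicity:

import cats.effect.IO
import java.util.concurrent.CompletableFuture

def fromCompletableFuture[A](mkCf: () => CompletableFuture[A]): IO[A] =
  IO.async { cb =>
    mkCf().whenComplete { (result: A, error: Throwable) =>
      if (error != null) cb(Left(error)) else cb(Right(result))
    }
    () // the registration body itself returns Unit
  }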
Or async happens when we call IO with unsafeToAsync or unsafeToFuture?
Sort of. With IO, nothing happens unless you call one of these (or use IOApp). But IO does not guarantee that you will execute on a different OS thread, or even asynchronously, unless you ask for it explicitly with IO.shift or IO.async.
You can guarantee thread switching at any time with e.g. (IO.shift *> myIO).unsafeRunAsyncAndForget(). This is possible exactly because myIO will not be executed until you ask for it, whether you have it as val myIO or def myIO.
You cannot magically transform blocking operations into non-blocking ones, however. That's not possible with either Future or IO.
What's the point of Async and Concurrent in cats-effect? Why they are separated?
Async and Concurrent (and Sync) are type classes. They are designed so that programmers can avoid being locked into cats.effect.IO and can offer an API that supports whatever you choose instead, such as Monix Task or Scalaz 8 ZIO, or even a monad transformer type such as OptionT[Task, *something*]. Libraries like fs2, Monix and http4s make use of them to give you more choice of what to use them with.
Concurrent adds extra things on top of Async, the most important of them being .cancelable and .start. These have no direct analogy with Future, since Future does not support cancellation at all.
.cancelable is a version of .async that allows you to also specify some logic to cancel the operation you're wrapping. A common example is network requests: if you're not interested in the results anymore, you can just abort the request without waiting for the server response, and not waste any sockets or processing time on reading it. You might never use it directly, but it has its place.
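Here's a hedged CE2-style sketch of .cancelable: a sleep built on a plain Java scheduler, where the returned token cancels the underlying timer task:

import cats.effect.IO
import java.util.concurrent.{Executors, TimeUnit}

val scheduler = Executors.newSingleThreadScheduledExecutor()

def sleep(millis: Long): IO[Unit] =
  IO.cancelable { cb =>
    val handle = scheduler.schedule(
      new Runnable { def run(): Unit = cb(Right(())) },
      millis,
      TimeUnit.MILLISECONDS
    )
    IO(handle.cancel(false)).map(_ => ()) // the cancellation token
  }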
But what good are cancelable operations if you can't cancel them? The key observation here is that you cannot cancel an operation from within itself. Somebody else has to make that decision, and it happens concurrently with the operation itself (which is where the type class gets its name). That's where .start comes in. In short,
.start is an explicit fork of a green thread.
Doing someIO.start is akin to doing val t = new Thread(someRunnable); t.start(), except it's green now. And Fiber is essentially a stripped-down version of the Thread API: you can do .join, which is like Thread#join() but does not block an OS thread, and .cancel, which is a safe version of .interrupt().
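A quick sketch (CE2 again, where .start needs an implicit ContextShift; someIO stands for any IO you want to fork):

import cats.effect.{ContextShift, IO}
import scala.concurrent.ExecutionContext

implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

val someIO: IO[Unit] = IO(println("working on a green thread"))

val program: IO[Unit] = for {
  fiber <- someIO.start // fork: like new Thread(...).start(), but green
  _     <- fiber.cancel // or fiber.join to wait without blocking an OS thread
} yield ()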
Note that there are other ways to fork green threads. For example, doing parallel operations:
val ids: List[Int] = List.range(1, 1000)
def processId(id: Int): IO[Unit] = ???
val processAll: IO[Unit] = ids.parTraverse_(processId)
will fork processing all IDs to green threads and then join them all. Or using .race:
val fetchFromS3: IO[String] = ???
val fetchFromOtherNode: IO[String] = ???
val fetchWhateverIsFaster = IO.race(fetchFromS3, fetchFromOtherNode).map(_.merge)
will execute the fetches in parallel, give you the first completed result, and automatically cancel the slower fetch. So doing .start and using Fiber is not the only way to fork more green threads, just the most explicit one. And that answers:
Is IO a green thread? If yes, why is there a Fiber object in cats-effect? As I understand the Fiber is the green thread, but docs claim we can think of IOs as green threads.
IO is like a green thread, meaning you can have lots of them running in parallel without the overhead of OS threads, and code in a for-comprehension behaves as if it were blocking until the result is computed.
Fiber is a tool for controlling green threads explicitly forked (waiting for completion or cancelling).
I have a scala program that runs for a while and then terminates. I'd like to provide a library to this program that, behind the scenes, schedules an asynchronous task to run every N seconds. I'd also like the program to terminate when the main entrypoint's work is finished without needing to explicitly tell the background work to shut down (since it's inside a library).
As best I can tell the idiomatic way to do polling or scheduled work in Scala is with Akka's ActorSystem.scheduler.schedule, but using an ActorSystem makes the program hang after main waiting for the actors. I then tried and failed to add another actor that joins on the main thread, seemingly because "Anything that blocks a thread is not advised within Akka"
I could introduce a custom dispatcher; I could kludge something together with a polling isAlive check, or add a similar check inside each worker; or I could give up on Akka and just use raw Threads.
This seems like a not-too-unusual thing to want to do, so I'd like to use idiomatic Scala if there's a clear best way.
I don't think there is an idiomatic Scala way.
A JVM program terminates when all non-daemon threads are finished. So you can schedule your task to run on a daemon thread.
So just use Java functionality:
import java.util.concurrent._

object Main {
  def main(args: Array[String]): Unit = {
    // Make a ThreadFactory that creates daemon threads.
    val threadFactory = new ThreadFactory() {
      def newThread(r: Runnable) = {
        val t = Executors.defaultThreadFactory().newThread(r)
        t.setDaemon(true)
        t
      }
    }

    // Create a scheduled pool using this thread factory.
    val pool = Executors.newSingleThreadScheduledExecutor(threadFactory)

    // Schedule some function to run every second after an initial delay of 0 seconds.
    // This assumes Scala 2.12. In 2.11 you'd have to create a `new Runnable` manually.
    // Note that scheduling will stop if the function throws an exception.
    pool.scheduleAtFixedRate(() => println("run"), 0, 1, TimeUnit.SECONDS)

    Thread.sleep(5000)
  }
}
You can also use Guava to create a daemon thread factory with new ThreadFactoryBuilder().setDaemon(true).build().
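That is (assuming the Guava dependency is on the classpath):

import com.google.common.util.concurrent.ThreadFactoryBuilder
import java.util.concurrent.Executors

val daemonFactory = new ThreadFactoryBuilder().setDaemon(true).build()
val pool = Executors.newSingleThreadScheduledExecutor(daemonFactory)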
If you use the Akka scheduler, you will be relying on a highly tuned and optimized implementation that is well tested. Bringing up an actor system is a bit heavyweight though, I agree, and you have to bring in a dependency on Akka. If you are OK with that, you can explicitly call system.shutdown from main when you are done, or wrap it in a function that does it for you.
Alternatively, you could try something along these lines:
import java.util.concurrent.TimeoutException
import scala.concurrent._
import ExecutionContext.Implicits.global

object Main extends App {
  def repeatEvery[T](timeoutMillis: Int)(f: => T): Future[T] = {
    val p = Promise[T]()
    val never = p.future
    f
    def timeout = Future {
      Thread.sleep(timeoutMillis)
      throw new TimeoutException
    }
    val failure = Future.firstCompletedOf(List(never, timeout))
    failure.recoverWith { case _ => repeatEvery(timeoutMillis)(f) }
  }

  repeatEvery(1000) {
    println("scheduled job called")
  }

  println("main started doing its work")
  Thread.sleep(10000)
  println("main finished")
}
Prints:
scheduled job called
main started doing its work
scheduled job called
scheduled job called
scheduled job called
scheduled job called
scheduled job called
scheduled job called
scheduled job called
scheduled job called
scheduled job called
main finished
I don't like that it uses Thread.sleep, but that was done to avoid pulling in any third-party scheduler, and Scala's Future does not provide timeout options. So you'll be wasting one thread on that scheduling task, but that's what the Akka scheduler seems to do anyway. The difference is that you might want a single scheduler for the whole JVM so as not to waste too many threads. The code I provided, albeit simpler, will waste a thread per job.
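If the thread-per-job sleep bothers you, here's a hedged sketch of sharing a single scheduler for all timeouts (timeoutAfter is a made-up helper that could replace the timeout future in repeatEvery above):

import java.util.concurrent.{Executors, TimeUnit, TimeoutException}
import scala.concurrent.{Future, Promise}

// One scheduler thread for the whole JVM instead of one sleeping thread per job.
val scheduler = Executors.newSingleThreadScheduledExecutor()

def timeoutAfter(millis: Long): Future[Nothing] = {
  val p = Promise[Nothing]()
  val task: Runnable = () => { p.failure(new TimeoutException); () }
  scheduler.schedule(task, millis, TimeUnit.MILLISECONDS)
  p.future
}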
I would like to implement my asynchronous processing with scalaz.concurrent.Task. I need a function (Task[A], Task[B]) => Task[(A, B)] to return a new task that works as follows:
run Task[A] and Task[B] in parallel and wait for the results;
if one of the tasks fails then cancel the second one and wait until it terminates;
return the results of both tasks.
How would you implement such a function?
As I mentioned above, if you don't care about actually stopping the non-failed computation, you can use Nondeterminism. For example:
import scalaz._, scalaz.Scalaz._, scalaz.concurrent._

def pairFailSlow[A, B](a: Task[A], b: Task[B]): Task[(A, B)] = a.tuple(b)

def pairFailFast[A, B](a: Task[A], b: Task[B]): Task[(A, B)] =
  Nondeterminism[Task].both(a, b)

val divByZero: Task[Int] = Task(1 / 0)

val waitALongTime: Task[String] = Task {
  Thread.sleep(10000)
  println("foo")
  "foo"
}
And then:
pairFailSlow(divByZero, waitALongTime).run // fails immediately
pairFailSlow(waitALongTime, divByZero).run // hangs while sleeping
pairFailFast(divByZero, waitALongTime).run // fails immediately
pairFailFast(waitALongTime, divByZero).run // fails immediately
In every case except the first, the side effect in waitALongTime will happen. If you wanted to attempt to stop that computation, you'd need to use something like Task's runAsyncInterruptibly.
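A hedged sketch of that, assuming the overload that takes an AtomicBoolean interrupt flag (check your scalaz version's exact signature):

import java.util.concurrent.atomic.AtomicBoolean
import scalaz.\/
import scalaz.concurrent.Task

val interrupt = new AtomicBoolean(false)

waitALongTime.runAsyncInterruptibly(
  (result: Throwable \/ String) => println(result),
  interrupt
)

interrupt.set(true) // request that the still-running computation be stopped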
There is a weird conception among Java developers that you should not cancel parallel tasks. They condemn Thread.stop() and have marked it deprecated. Without Thread.stop() you cannot really cancel a future. All you can do is send some signal to the future, or modify some shared variable, and have the code inside the future check it periodically. So all libraries that provide futures can suggest only one way to cancel a future: do it cooperatively.
I'm facing the same problem now and am in the middle of writing my own library for futures that can be cancelled. There are some difficulties, but they can be solved. You just cannot call Thread.stop() at any arbitrary position: the thread may be in the middle of updating shared variables. Locks would be released normally, but an update may be stopped half-way, e.g. with only half of a double value written, and so on. So I'm introducing a kind of lock: if the thread is in a guarded state, it is not killed by Thread.stop() but is instead sent a specific message. The guarded state is assumed to always be fast enough to wait for. At all other times, in the middle of a computation, the thread may be safely stopped and replaced with a new one.
So the answer is: you are not supposed to want to cancel futures; otherwise you are a heretic and no one in the Java community will lend you a willing hand. You have to define your own execution context that can kill threads, and write your own futures library to run on top of that context.
If you want to execute long running computations concurrently (on a single machine), Akka actors can help.
One approach is to spawn a new actor for each piece of work. Something like
while (true) {
  val actor = system.actorOf(Props[ProcessingActor])
  (actor ? msg).map { result =>
    // ... handle result ...
    system.stop(actor)
  }
}
A second idea is to configure a set number of actors behind a router, and then send all messages to the router.
val router = system.actorOf(
  Props[ProcessingActor].withRouter(RoundRobinRouter(nrOfInstances = 5)))

while (true) {
  (router ? msg).map { ... }
}
I wonder which is better if the system is overloaded (the rate of incoming messages is higher than the processing rate)?
Which will last longer? And will both eventually blow up the system with an OOMError?
Before you create a new actor for each task, you could also just use a Future. It really depends on what you want to achieve. To get as much work done as possible with the least memory usage, you should use the actor/router approach. Futures are more expensive, because each task would create a new instance of Future and Promise. But it really depends on your use case which approach is better. I just wouldn't create a lot of actors when there really is no need for them, especially as system.actorOf always creates a new error kernel.