I hope this is not a stupid question and that I'm not missing something obvious. I'm following the Coursera parallel programming class, and in week 1 they have the following code to run tasks in parallel (mine may differ slightly, since I typed it in):
object parallelism {
  val forkJoinPool = new ForkJoinPool

  abstract class TaskScheduler {
    def schedule[T](body: => T): ForkJoinTask[T]
    def parallel[A, B](taskA: => A, taskB: => B): (A, B) = {
      val right = task {
        taskB
      }
      val left = taskA
      (left, right.join())
    }
  }

  class DefaultTaskScheduler extends TaskScheduler {
    def schedule[T](body: => T): ForkJoinTask[T] = {
      val t = new RecursiveTask[T] {
        def compute = body
      }
      Thread.currentThread match {
        case wt: ForkJoinWorkerThread => t.fork()
        case _ => forkJoinPool.execute(t)
      }
      t
    }
  }

  val scheduler =
    new DynamicVariable[TaskScheduler](new DefaultTaskScheduler)

  def task[T](body: => T): ForkJoinTask[T] = {
    scheduler.value.schedule(body)
  }

  def parallel[A, B](taskA: => A, taskB: => B): (A, B) = {
    scheduler.value.parallel(taskA, taskB)
  }
}
I wrote a unit test that goes something like this:
test("Test two task parallelizer") {
val (r1, t1) = timed {
( sieveOfEratosthenes(100000),
sieveOfEratosthenes(100000))
}
val (r2, t2) = timed {
parallel (
sieveOfEratosthenes(100000),
sieveOfEratosthenes(100000)
)
}
assert(t2 < t1)
}
test("Test four task parallelizer") {
val (r1, t1) = timed {
(sieveOfEratosthenes(100000),
sieveOfEratosthenes(100000),
sieveOfEratosthenes(100000),
sieveOfEratosthenes(100000))
}
val (r2, t2) = timed {
parallel (
parallel (
sieveOfEratosthenes(100000),
sieveOfEratosthenes(100000)
),
parallel (
sieveOfEratosthenes(100000),
sieveOfEratosthenes(100000)
)
)
}
assert(t2 < t1)
}
On the first test I get good savings (300 ms down to 50 ms), but on the second test I only get about 20 ms of savings, and if I run it often enough the time may actually increase and fail my test. (The second value in the tuple returned by "timed" is the time in milliseconds.)
The test method is first version from here: https://rosettacode.org/wiki/Sieve_of_Eratosthenes#Scala
Can someone teach me what is going on in the second test? If it matters, I'm running on a single cpu, quad core i5. The number of threads I create doesn't seem to make a lot of difference for this particular test.
The implementation of sieveOfEratosthenes you chose is already parallel (it uses a ParSet internally), so wrapping it in parallel won't help: a single call already keeps all the cores busy.
The speedup you see in the first test is most likely JIT warm-up rather than real parallel speedup; the first timed block pays the compilation cost and the second one benefits from it.
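If you want the parallel wrapper itself to show a speedup, it helps to use a sieve with no internal parallelism and to warm up the JIT before timing. A minimal sketch (sequentialSieve and the warm-up loop are illustrative names of mine, not part of the course code):

// Plain sequential sieve, so that `parallel` is the only source of parallelism.
def sequentialSieve(limit: Int): Seq[Int] = {
  val composite = new Array[Boolean](limit + 1)
  for (i <- 2 to math.sqrt(limit).toInt if !composite(i);
       j <- i * i to limit by i) composite(j) = true
  (2 to limit).filterNot(i => composite(i))
}

// Warm up the JIT so that neither timed block pays the compilation cost.
(1 to 5).foreach(_ => sequentialSieve(100000))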
Could someone help me organize a directory scan with ZIO? This is my version, but it doesn't catch all file creation events (some events are missed).
object Main extends App {
  val program = for {
    stream <- ZIO.succeed(waitEvents)
    _ <- stream.run(ZSink.foreach(k => putStrLn(k.map(e => (e.kind(), e.context())).mkString("\n"))))
  } yield ()

  val managedWatchService = ZManaged.make {
    for {
      watchService <- FileSystem.default.newWatchService
      path = Path("c:/temp")
      _ <- path.register(watchService,
        StandardWatchEventKinds.ENTRY_CREATE,
        StandardWatchEventKinds.ENTRY_DELETE
      )
    } yield watchService
  }(_.close.orDie)

  val lookKey = ZManaged.make {
    managedWatchService.use(watchService => watchService.take)
  }(_.reset)

  val waitEvents = ZStream.fromEffect {
    lookKey.use(key => key.pollEvents)
  }.repeat(Schedule.forever)

  override def run(args: List[String]): ZIO[zio.ZEnv, Nothing, ExitCode] =
    program
      .provideLayer(Console.live ++ Blocking.live ++ Clock.live)
      .exitCode
}
Thank you for your advice.
You are forcing your WatchService to shut down and be recreated every time you poll for events. Since that probably involves some system handles, it is likely fairly slow, so you would miss file events that occur in between. What you more likely want is to produce the WatchService once and then poll it repeatedly. I would suggest something like this instead:
object Main extends App {
  val managedWatchService = ZManaged.make {
    for {
      watchService <- FileSystem.default.newWatchService
      path = Path("c:/temp")
      _ <- path.register(watchService,
        StandardWatchEventKinds.ENTRY_CREATE,
        StandardWatchEventKinds.ENTRY_DELETE
      )
    } yield watchService
  }(_.close.orDie)

  // Convert ZManaged[R, E, ZStream[R, E, A]] into ZStream[R, E, A]
  val waitEvents = ZStream.unwrapManaged(
    managedWatchService.mapM(_.take).map { key =>
      // Use simple effect composition instead of a managed for readability.
      ZStream.repeatEffect(key.pollEvents <* key.reset)
        // Optional: Flatten the `List` of values that is returned
        .flattenIterables
    }
  )

  val program = waitEvents
    .map(e => (e.kind(), e.context()).toString)
    .foreach(putStrLn).unit

  override def run(args: List[String]): ZIO[zio.ZEnv, Nothing, ExitCode] =
    program
      .provideLayer(Console.live ++ Blocking.live ++ Clock.live)
      .exitCode
}
Also as a side note, when using ZManaged, you probably don't want to do
ZManaged.make(otherManaged.use(doSomething))(tearDown)
because you will cause the finalizers to execute out of order. ZManaged can already handle the ordering of teardown just through normal flatMap composition.
otherManaged.flatMap { other => ZManaged.make(doSomething(other))(tearDown) }
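In terms of the watch-service code above, that would look something like this (a sketch along the same lines, not tested):

// Acquire the key from the already-managed watch service; ZManaged then
// guarantees reset runs before close when the composite resource is released.
val managedKey = managedWatchService.flatMap { watchService =>
  ZManaged.make(watchService.take)(_.reset)
}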
I have hundreds of tasks converting inputs into outputs, which should be persisted:
case class Op(i: Int)
case class Output(i: Int)

val inputs: Seq[Op] = ??? // Number of inputs is huge

def executeLongRunning(op: Op): Output = {
  Thread.sleep(Random.nextInt(1000) + 1000) // I cannot predict which tasks will finish first
  println("<==", op)
  Output(op.i)
}

def executeSingleThreadedSave(outputs: Seq[Output]): Unit = {
  synchronized { // Problem is, persisting output is itself a long-running process,
                 // which cannot be parallelized (internally uses blocking queue).
    Thread.sleep(5000)      // persist time is independent of outputs.size
    println("==>", outputs) // Order of persisted records does not matter
  }
}

// TODO: this needs to be implemented
def magicSaver(eventualOutputs: Seq[Future[Output]], saver: Seq[Output] => Unit): Unit = ???

val eventualOutputs: Seq[Future[Output]] = inputs.map((input: Op) => Future(executeLongRunning(input)))
magicSaver(eventualOutputs, executeSingleThreadedSave)
I could implement magicSaver to be:
def magicSaver(eventualOutputs: Seq[Future[Output]], saver: Seq[Output] => Unit): Unit = {
  saver(Await.result(Future.sequence(eventualOutputs), Duration.Inf))
}
But this has the major drawback that we wait for all inputs to be processed before we start persisting outputs, which is not ideal from a fault-tolerance standpoint.
Another implementation is:
def magicSaver(eventualOutputs: Seq[Future[Output]], saver: Seq[Output] => Unit): Unit = {
  eventualOutputs.foreach(_.onSuccess { case output: Output => saver(Seq(output)) })
}
but this blows up execution time to inputs.size * 5 seconds (because of the synchronized nature of the saver), which is not acceptable.
I want a way to batch together already-completed futures once the number of such futures reaches some trade-off size (100, for example), but I'm not sure how to do that in a clean manner without explicitly coding polling logic:
def magicSaver(eventualOutputs: Seq[Future[Output]], saver: Seq[Output] => Unit): Unit = {
  def waitFor100CompletedFutures(eventualOutputs: Seq[Future[Output]]): (Seq[Output], Seq[Future[Output]]) = {
    var completedCount: Int = 0
    do {
      completedCount = eventualOutputs.count(_.isCompleted)
      Thread.sleep(100)
    } while ((completedCount < 100) && (completedCount != eventualOutputs.size))
    val (completed: Seq[Future[Output]], remaining: Seq[Future[Output]]) = eventualOutputs.partition(_.isCompleted)
    (Await.result(Future.sequence(completed), Duration.Inf), remaining)
  }

  var remaining: Seq[Future[Output]] = eventualOutputs
  do {
    // Scala can't destructure a tuple into existing vars, so bind and reassign.
    val (completed, newRemaining) = waitFor100CompletedFutures(remaining)
    saver(completed)
    remaining = newRemaining
  } while (remaining.nonEmpty)
}
Any elegant solution I'm missing here?
I'm posting my solution here for reference. It has the benefit that it avoids fixed-size batching altogether and invokes processOutput as soon as some output becomes available, which is the best situation under the constraints I've described.
def magicSaver[T, R](eventualOutputs: Seq[Future[T]],
                     processOutput: Seq[T] => R)(implicit ec: ExecutionContext): Seq[R] = {
  logInfo(s"Size of outputs to save: ${eventualOutputs.size}")
  var remaining: Seq[Future[T]] = eventualOutputs
  val processorOutput: mutable.ListBuffer[R] = new mutable.ListBuffer[R]
  do {
    val (currentCompleted: Seq[Future[T]], currentRemaining: Seq[Future[T]]) = remaining.partition(_.isCompleted)
    if (remaining.size == currentRemaining.size) {
      Thread.sleep(100)
    } else {
      logInfo(s"Got ${currentCompleted.size} completed records, remaining ${currentRemaining.size}")
      val completed = currentCompleted.map(Await.result(_, Duration.Zero))
      processorOutput.append(processOutput(completed))
    }
    remaining = currentRemaining
  } while (remaining.nonEmpty)
  processorOutput
}
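Wired back up to the question's names, usage would look something like this (the global ExecutionContext import is an assumption; everything else reuses the definitions above):

import scala.concurrent.ExecutionContext.Implicits.global

val eventualOutputs: Seq[Future[Output]] =
  inputs.map(input => Future(executeLongRunning(input)))

// T = Output, R = Unit: each batch of completed futures is persisted as soon as it is available.
val persisted: Seq[Unit] = magicSaver(eventualOutputs, executeSingleThreadedSave)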
I have a program which consumes an infinite stream of data. Along the way I'd like to record some metrics, which form a monoid since they're just simple sums and averages. Periodically, I want to write out these metrics somewhere, clear them, and return to accumulating them. I have essentially:
object Foo {
  type MetricsIO[A] = StateT[IO, MetricData, A]

  def recordMetric(m: MetricData): MetricsIO[Unit] = {
    StateT.modify(_.combine(m))
  }

  def sendMetrics: MetricsIO[Unit] = {
    StateT.modifyF { s =>
      val write: IO[Unit] = writeMetrics(s)
      write.attempt.map {
        case Left(_) => s
        case Right(_) => Monoid[MetricData].empty
      }
    }
  }
}
So most of the execution uses IO directly and lifts using StateT.liftF. And in certain situations, I include some calls to recordMetric. At the end of it I've got a stream:
val mainStream: Stream[MetricsIO, Bar] = ...
And I want to periodically, say every minute or so, dump the metrics, so I tried:
val scheduler: Scheduler = ...

val sendStream =
  scheduler
    .awakeEvery[MetricsIO](FiniteDuration(1, TimeUnit.MINUTES))
    .evalMap(_ => Foo.sendMetrics)

val result = mainStream.concurrently(sendStream).compile.drain
And then I do the usual top level program stuff of calling run with the start state and then calling unsafeRunSync.
The issue is, I only ever see empty metrics! I suspect it's something to do with my monoid implicitly providing empty metrics to sendStream, but I can't quite figure out why that should be or how to fix it. Maybe there's a way I can "interleave" these sendMetrics calls into the main stream instead?
Edit: here's a minimal complete runnable example:
import fs2._
import cats.implicits._
import cats.data._
import cats.effect._
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

val sec = Executors.newScheduledThreadPool(4)
implicit val ec = ExecutionContext.fromExecutorService(sec)

type F[A] = StateT[IO, List[String], A]

val slowInts = Stream.unfoldEval[F, Int, Int](1) { n =>
  StateT(state => IO {
    Thread.sleep(500)
    val message = s"hello $n"
    val newState = message :: state
    val result = Some((n, n + 1))
    (newState, result)
  })
}

val ticks = Scheduler.fromScheduledExecutorService(sec).fixedDelay[F](FiniteDuration(1, SECONDS))

val slowIntsPeriodicallyClearedState = slowInts.either(ticks).evalMap[Int] {
  case Left(n) => StateT.liftF(IO(n))
  case Right(_) => StateT(state => IO {
    println(state)
    (List.empty, -1)
  })
}
Now if I do:
slowInts.take(10).compile.drain.run(List.empty).unsafeRunSync
Then I get the expected result - the state properly accumulates into the output. But if I do:
slowIntsPeriodicallyClearedState.take(10).compile.drain.run(List.empty).unsafeRunSync
Then I see an empty list consistently printed out. I would have expected partial lists (approx. 2 elements) printed out.
StateT is not safe to use with effect types here, because it is not safe in the face of concurrent access: when two streams run concurrently, each one threads its own copy of the state, so updates made on one side are never seen by the other (which is why the ticking branch only ever sees an empty list). Instead, consider using a Ref (from either fs2 or cats-effect, depending on the version you're on).
Something like this:
def slowInts(ref: Ref[IO, List[String]]) = Stream.unfoldEval[IO, Int, Int](1) { n =>
  val message = s"hello $n"
  ref.modify(message :: _) *> IO {
    Thread.sleep(500)
    val result = Some((n, n + 1))
    result
  }
}

val ticks = Scheduler.fromScheduledExecutorService(sec).fixedDelay[IO](FiniteDuration(1, SECONDS))

def slowIntsPeriodicallyClearedState(ref: Ref[IO, List[String]]) =
  slowInts(ref).either(ticks).evalMap[Int] {
    case Left(n) => IO.pure(n)
    case Right(_) =>
      // modify returns a Change(previous, now); print the state as it was
      // before the reset, then continue with the cleared list.
      ref.modify(_ => Nil).flatMap { case Change(previous, _) =>
        IO(println(previous)).as(-1)
      }
  }
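A rough sketch of how this could be wired up (the refOf constructor is the fs2 0.10-era async API and its exact name and signature are an assumption here; on newer versions you would use cats.effect.concurrent.Ref.of instead):

// Create the Ref, then run the stream against it.
val program: IO[Unit] = for {
  ref <- fs2.async.refOf[IO, List[String]](List.empty)
  _   <- slowIntsPeriodicallyClearedState(ref).take(10).compile.drain
} yield ()

program.unsafeRunSync()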
Suppose I have a simple application
object FutureApp extends App {
  val executor = newFixedThreadPool(8)
  implicit val executionContext = fromExecutorService(executor)

  val start = currentTimeMillis()

  /*
   * In a real application we don't know how long it could take to execute the
   * future body.
   */
  val f0: Future[Int] = Future { Thread.sleep(2700); 0 }
  val f1: Future[Int] = Future { Thread.sleep(5500); 1 }
  val f2: Future[Int] = Future { Thread.sleep(1500); 2 }

  val seq: Future[List[Int]] = Future.sequence(f0 :: f1 :: f2 :: Nil)

  seq onComplete {
    case Success(res) => println { "R:" + res }
    case Failure(t)   => println { "R:" + t.getMessage }
  }

  /*
   * Instead of invoking the code below I want to shut down the
   * executor without specifying the biggest execution time (5500).
   */
  if (!executionContext.awaitTermination(5500, MILLISECONDS)) {
    val end = currentTimeMillis()
    println { s"executionTime=${(end - start).toDouble / 1000}" }
    executionContext.shutdownNow()
  }
}
It executes the code in three future bodies (body: => T). I want to combine the results of the futures, and for that I use the Future.sequence function. There is only one issue left to solve: I need to shut down the executor once the longest-running future has finished, but I don't know that time in advance. It could be 5 seconds or 10 minutes, etc.
How can I achieve that?
In your simple case:
Future.sequence(....).onComplete{_ => executionContext.shutdownNow()}
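As a sketch of how that fits the program above (this replaces the question's onComplete and awaitTermination blocks; moving the timing print into the callback is my own choice):

// Shut the pool down only after the combined future completes, however long
// the slowest body takes. shutdown() lets already-submitted tasks finish.
seq onComplete { res =>
  res match {
    case Success(r) => println { "R:" + r }
    case Failure(t) => println { "R:" + t.getMessage }
  }
  val end = currentTimeMillis()
  println { s"executionTime=${(end - start).toDouble / 1000}" }
  executionContext.shutdown()
}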
But if you have a number of cyclically dependent computations and you need to know when they are all finished, you could use the lattice abstraction presented in "Programming with Futures, Lattices, and Quiescence" by Philipp Haller.
We have some code which needs to run faster. It's already been profiled, so now we would like to make use of multiple threads. Usually I would set up an in-memory queue and have a number of threads taking jobs off the queue and calculating the results. For the shared data I would use a ConcurrentHashMap or similar.
I don't really want to go down that route again. From what I have read, using actors will result in cleaner code, and if I use Akka, migrating to more than one JVM should be easier. Is that true?
However, I don't know how to think in actors, so I am not sure where to start.
To give a better idea of the problem, here is some sample code:
case class Trade(price: Double, volume: Int, stock: String) {
  def value(priceCalculator: PriceCalculator) =
    (priceCalculator.priceFor(stock) - price) * volume
}

class PriceCalculator {
  def priceFor(stock: String) = {
    Thread.sleep(20) // a slow operation which can be cached
    50.0
  }
}

object ValueTrades {
  def valueAll(trades: List[Trade],
               priceCalculator: PriceCalculator): List[(Trade, Double)] = {
    trades.map { trade => (trade, trade.value(priceCalculator)) }
  }

  def main(args: Array[String]) {
    val trades = List(
      Trade(30.5, 10, "Foo"),
      Trade(30.5, 20, "Foo")
      // usually much longer
    )
    val priceCalculator = new PriceCalculator
    val values = valueAll(trades, priceCalculator)
  }
}
I'd appreciate it if someone with experience using actors could suggest how this would map on to actors.
This is a complement to my comment on shared results for expensive calculations. Here it is:
import scala.actors._
import Actor._
import Futures._

case class PriceFor(stock: String) // Ask for result

// The following could be an "object" as well, if it's supposed to be singleton
class PriceCalculator extends Actor {
  val map = new scala.collection.mutable.HashMap[String, Future[Double]]()

  def act = loop {
    react {
      case PriceFor(stock) => reply(map getOrElseUpdate (stock, future {
        Thread.sleep(2000) // a slow operation
        50.0
      }))
    }
  }
}
Here's a usage example:
scala> val pc = new PriceCalculator; pc.start
pc: PriceCalculator = PriceCalculator@141fe06
scala> class Test(stock: String) extends Actor {
| def act = {
| println(System.currentTimeMillis().toString+": Asking for stock "+stock)
| val f = (pc !? PriceFor(stock)).asInstanceOf[Future[Double]]
| println(System.currentTimeMillis().toString+": Got the future back")
| val res = f.apply() // this blocks until the result is ready
| println(System.currentTimeMillis().toString+": Value: "+res)
| }
| }
defined class Test
scala> List("abc", "def", "abc").map(new Test(_)).map(_.start)
1269310737461: Asking for stock abc
res37: List[scala.actors.Actor] = List(Test@6d888e, Test@1203c7f, Test@163d118)
1269310737461: Asking for stock abc
1269310737461: Asking for stock def
1269310737464: Got the future back
scala> 1269310737462: Got the future back
1269310737465: Got the future back
1269310739462: Value: 50.0
1269310739462: Value: 50.0
1269310739465: Value: 50.0
scala> new Test("abc").start // Should return instantly
1269310755364: Asking for stock abc
res38: scala.actors.Actor = Test@15b5b68
1269310755365: Got the future back
scala> 1269310755367: Value: 50.0
For simple parallelization, where I throw a bunch of work out to process and then wait for it all to come back, I tend to like to use a Futures pattern.
class ActorExample {
  import actors._
  import Actor._

  class Worker(val id: Int) extends Actor {
    def busywork(i0: Int, i1: Int) = {
      var sum, i = i0
      while (i < i1) {
        i += 1
        sum += 42 * i
      }
      sum
    }
    def act() { loop { react {
      case (i0: Int, i1: Int) => sender ! busywork(i0, i1)
      case None => exit()
    }}}
  }

  val workforce = (1 to 4).map(i => new Worker(i)).toList

  def parallelFourSums = {
    workforce.foreach(_.start())
    val futures = workforce.map(w => w !! ((w.id, 1000000000)))
    val computed = futures.map(f => f() match {
      case i: Int => i
      case _ => throw new IllegalArgumentException("I wanted an int!")
    })
    workforce.foreach(_ ! None)
    computed
  }

  def serialFourSums = {
    val solo = workforce.head
    workforce.map(w => solo.busywork(w.id, 1000000000))
  }

  def timed(f: => List[Int]) = {
    val t0 = System.nanoTime
    val result = f
    val t1 = System.nanoTime
    (result, t1 - t0)
  }

  def go {
    val serial = timed(serialFourSums)
    val parallel = timed(parallelFourSums)
    println("Serial result: " + serial._1)
    println("Parallel result:" + parallel._1)
    printf("Serial took %.3f seconds\n", serial._2 * 1e-9)
    printf("Parallel took %.3f seconds\n", parallel._2 * 1e-9)
  }
}
Basically, the idea is to create a collection of workers--one per workload--and then throw all the data at them with !! which immediately gives back a future. When you try to read the future, the sender blocks until the worker's actually done with the data.
You could rewrite the above so that PriceCalculator extended Actor instead, and valueAll coordinated the return of the data.
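For example, a rough sketch of that rewrite, reusing the PriceFor message and the caching PriceCalculator actor from the earlier answer (the shapes here are assumptions, not tested code):

// Fire one !! request per trade, collect the futures, then join them all.
def valueAll(trades: List[Trade], pc: PriceCalculator): List[(Trade, Double)] = {
  val pending = trades.map(t => (t, pc !! PriceFor(t.stock)))
  pending.map { case (t, f) =>
    // The reply is itself a Future[Double] (the cached slow computation).
    val price = f().asInstanceOf[Future[Double]].apply()
    (t, (price - t.price) * t.volume)
  }
}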
Note that you have to be careful passing non-immutable data around.
Anyway, on the machine I'm typing this from, if you run the above you get:
scala> (new ActorExample).go
Serial result: List(-1629056553, -1629056636, -1629056761, -1629056928)
Parallel result:List(-1629056553, -1629056636, -1629056761, -1629056928)
Serial took 1.532 seconds
Parallel took 0.443 seconds
(Obviously I have at least four cores; the parallel timing varies rather a bit depending on which worker gets what processor and what else is going on on the machine.)