I saw something strange with the following code:
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Await, Future}
import scala.concurrent.duration._

val numJobs = 50000
var numThreads = 10

// customize the execution context to use the specified number of threads
implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(numThreads))

// define the tasks
val tasks = for (i <- 1 to numJobs) yield Future {
  // do something more fancy here
  i
}

// aggregate and wait for final result
val aggregated = Future.sequence(tasks)
val oneToNSum = Await.result(aggregated, 15.seconds).sum
However, this doesn't finish with a Success, and I don't know why. I suspect the Await.result call is the cause. Can anyone point out the solution?
I'm working on an application that has a couple of long-running streams going, where it subscribes to data about a certain entity and processes that data. These streams should be up 24/7, so we needed to handle failures (network issues etc).
For that purpose, we've wrapped our sources in RestartingSource.
I'm now trying to verify this behaviour, and while it looks like it functions, I'm struggling to create a test where I push in some data, verify that it processes correctly, then send an error, and verify that it reconnects after that and continues processing.
I've boiled that down to this minimal case:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{RestartSource, Sink, Source}
import akka.stream.testkit.TestPublisher
import org.scalatest.concurrent.Eventually
import org.scalatest.{FlatSpec, Matchers}

import scala.concurrent.duration._
import scala.concurrent.ExecutionContext

class MinimalSpec extends FlatSpec with Matchers with Eventually {

  "restarting a failed source" should "be testable" in {
    implicit val sys: ActorSystem = ActorSystem("akka-grpc-measurements-for-test")
    implicit val mat: ActorMaterializer = ActorMaterializer()
    implicit val ec: ExecutionContext = sys.dispatcher

    val probe = TestPublisher.probe[Int]()
    val restartingSource = RestartSource
      .onFailuresWithBackoff(1 second, 1 minute, 0d) { () => Source.fromPublisher(probe) }

    var last: Int = 0
    val sink = Sink.foreach { l: Int => last = l }
    restartingSource.runWith(sink)

    probe.sendNext(1)
    eventually {
      last shouldBe 1
    }
    probe.sendNext(2)
    eventually {
      last shouldBe 2
    }

    probe.sendError(new RuntimeException("boom"))
    probe.expectSubscription()

    probe.sendNext(3)
    eventually {
      last shouldBe 3
    }
  }
}
This test consistently fails on the last eventually block with "Last failure message: 2 was not equal to 3". What am I missing here?
Edit: the Akka version is 2.5.31.
I figured it out after having a look at the TestPublisher code. Its subscription is a lazy val. So when RestartSource detects the error and executes the factory method () => Source.fromPublisher(probe) again, it gets a new Source, but the probe's subscription still points to the old Source. Changing the code so that each restart creates both a new Source and a new TestPublisher makes it work.
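A minimal sketch of that change, reusing the setup from MinimalSpec above (the @volatile var holding the current probe, the relaxed eventually patience, and the instance check are my own additions, not part of the original test):
import org.scalatest.time.{Millis, Seconds, Span}

// give eventually enough patience to cover the 1 second restart backoff
implicit val patienceConfig: PatienceConfig =
  PatienceConfig(timeout = Span(5, Seconds), interval = Span(100, Millis))

// create a fresh TestPublisher every time the restart factory runs, so the
// restarted Source subscribes to a probe whose lazy subscription is still unused
@volatile var probe: TestPublisher.Probe[Int] = null
val restartingSource = RestartSource
  .onFailuresWithBackoff(1.second, 1.minute, 0d) { () =>
    probe = TestPublisher.probe[Int]()
    Source.fromPublisher(probe)
  }

var last: Int = 0
restartingSource.runWith(Sink.foreach { l: Int => last = l })

eventually { probe should not be null }
probe.sendNext(1)
eventually { last shouldBe 1 }

val firstProbe = probe
probe.sendError(new RuntimeException("boom"))
// after the backoff the factory runs again and installs a fresh probe
eventually { probe should not be theSameInstanceAs(firstProbe) }
probe.sendNext(3)
eventually { last shouldBe 3 }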
I have a REST API provided by akka-http. In some cases I need to get data from an external database (Apache HBase), and I would like the query to fail if the database takes too long to deliver the data.
One naïve way is to wrap the call inside a Future and then block on it with Await.result and the desired duration:
import scala.concurrent.duration._
import scala.concurrent.{Await, Future}

object AsyncTest1 extends App {
  val future = Future {
    getMyDataFromDB()
  }
  val myData = Await.result(future, 100.millis)
}
This seems to be inefficient, as this implementation needs two threads. Is there a more efficient way to do this?
I have another use case where I want to send multiple queries in parallel and then aggregate the results, with the same delay limitation:
val future1 = Future {
  getMyDataFromDB1()
}
val future2 = Future {
  getMyDataFromDB2()
}
val foldedFuture = Future.fold(Seq(future1, future2))(MyAggregatedData)(myAggregateFunction)
val myData = Await.result(foldedFuture, 100.millis)
Same question here: what is the most efficient way to implement this? Thanks for your help.
One solution would be to use Akka's after pattern, which lets you pass a duration after which the future fails with an exception (or whatever you want). Take a look here; it demonstrates how to implement this.
EDIT:
I guess I'll post the code here in case the link gets broken in future:
import scala.concurrent._
import scala.concurrent.duration._
import ExecutionContext.Implicits.global
import scala.util.{Failure, Success}
import java.util.concurrent.TimeoutException
import akka.actor.ActorSystem
import akka.pattern.after

val system = ActorSystem("theSystem")

lazy val f = Future { Thread.sleep(2000); true }
lazy val t = after(duration = 1 second, using = system.scheduler)(Future.failed(new TimeoutException("Future timed out!")))

val fWithTimeout = Future firstCompletedOf Seq(f, t)

fWithTimeout.onComplete {
  case Success(x) => println(x)
  case Failure(error) => println(error)
}
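Applied to the aggregation use case from the question, the same pattern can be wrapped in a small reusable helper. A hedged sketch (the withTimeout name is my own, it assumes an ActorSystem is in scope, and the commented usage line refers to the foldedFuture from the question):
import java.util.concurrent.TimeoutException
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.pattern.after

// race the real future against a scheduled failure; whichever completes first wins
def withTimeout[T](f: Future[T], timeout: FiniteDuration)
                  (implicit system: ActorSystem, ec: ExecutionContext): Future[T] =
  Future.firstCompletedOf(Seq(
    f,
    after(timeout, system.scheduler)(Future.failed(new TimeoutException(s"timed out after $timeout")))
  ))

// e.g. applied to the aggregation example from the question:
// val myData = withTimeout(foldedFuture, 100.millis)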
I know I'm doing something wrong with mutable.ListBuffer, but I can't figure out how to fix it (or find a proper explanation of the issue).
I simplified the code below to reproduce the behaviour.
I'm basically trying to run functions in parallel that add elements to a list while my first list gets processed, and I end up "losing" elements.
import java.util.Properties
import scala.collection.mutable.ListBuffer
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext
import ExecutionContext.Implicits.global

object MyTestObject {
  var listBufferOfInts = new ListBuffer[Int]() // files that are processed

  def runFunction(): Int = {
    listBufferOfInts = new ListBuffer[Int]()
    val inputListOfInts = 1 to 1000
    val fut = Future.traverse(inputListOfInts) { i =>
      Future {
        appendElem(i)
      }
    }
    Await.ready(fut, Duration.Inf)
    listBufferOfInts.length
  }

  def appendElem(elem: Int): Unit = {
    listBufferOfInts ++= List(elem)
  }
}

MyTestObject.runFunction()
MyTestObject.runFunction()
MyTestObject.runFunction()
which returns:
res0: Int = 937
res1: Int = 992
res2: Int = 997
Obviously I would expect 1000 to be returned every time. How can I fix my code to keep the "architecture" but make my ListBuffer "synchronized"?
I don't know what the exact problem is, since you said you simplified it, but you still have an obvious race condition: multiple threads modify a single mutable collection, and that is very bad. As the other answers pointed out, you need some locking so that only one thread can modify the collection at a time. If your calculations are heavy, appending results to a buffer in a synchronized way shouldn't notably affect the performance, but when in doubt, always measure.
But synchronization is not needed; you can do something else instead, without vars and mutable state. Let each Future return its partial result and then merge them into a list. In fact, Future.traverse does just that.
import scala.concurrent.duration._
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global

def runFunction: Int = {
  val inputListOfInts = 1 to 1000
  val fut: Future[List[Int]] = Future.traverse(inputListOfInts.toList) { i =>
    Future {
      // some heavy calculations on i
      i * 4
    }
  }
  val listOfInts = Await.result(fut, Duration.Inf)
  listOfInts.size
}
Future.traverse already gives you an immutable list with all your results combined, no need to append them to a mutable buffer.
Needless to say, you will always get 1000 back.
# List.fill(10000)(runFunction).exists(_ != 1000)
res18: Boolean = false
I'm not sure the code above shows what you are trying to do correctly. Maybe the issue is that you are actually sharing a var ListBuffer which you reinitialise within runFunction.
When I take this out, I collect all the events I'm expecting correctly:
import java.util.Properties
import scala.collection.mutable.ListBuffer
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext
import ExecutionContext.Implicits.global

object BrokenTestObject extends App {
  var listBufferOfInts = new ListBuffer[Int]()

  def runFunction(): Int = {
    val inputListOfInts = 1 to 1000
    val fut = Future.traverse(inputListOfInts) { i =>
      Future {
        appendElem(i)
      }
    }
    Await.ready(fut, Duration.Inf)
    listBufferOfInts.length
  }

  def appendElem(elem: Int): Unit = {
    listBufferOfInts.append(elem)
  }

  BrokenTestObject.runFunction()
  BrokenTestObject.runFunction()
  BrokenTestObject.runFunction()
  println(s"collected ${listBufferOfInts.length} elements")
}
If you really have a synchronisation issue, you can use something like the following:
import java.util.Properties
import scala.collection.mutable.ListBuffer
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext
import ExecutionContext.Implicits.global

class WrappedListBuffer(val lb: ListBuffer[Int]) {
  def append(i: Int) {
    this.synchronized {
      lb.append(i)
    }
  }
}

object MyTestObject extends App {
  var listBufferOfInts = new WrappedListBuffer(new ListBuffer[Int]())

  def runFunction(): Int = {
    val inputListOfInts = 1 to 1000
    val fut = Future.traverse(inputListOfInts) { i =>
      Future {
        appendElem(i)
      }
    }
    Await.ready(fut, Duration.Inf)
    listBufferOfInts.lb.length
  }

  def appendElem(elem: Int): Unit = {
    listBufferOfInts.append(elem)
  }

  MyTestObject.runFunction()
  MyTestObject.runFunction()
  MyTestObject.runFunction()
  println(s"collected ${listBufferOfInts.lb.size} elements")
}
Changing
listBufferOfInts ++= List(elem)
to
synchronized {
  listBufferOfInts ++= List(elem)
}
makes it work. Could this become a performance issue? I'm still interested in an explanation and maybe a better way of doing things!
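If you do want to keep shared mutable state rather than collecting results with Future.traverse as shown above, one alternative sketch (my own variation, not from the answers here) is a thread-safe collection from java.util.concurrent, which avoids explicit locking:
import java.util.concurrent.ConcurrentLinkedQueue
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global

object ConcurrentQueueTestObject {
  // ConcurrentLinkedQueue is lock-free and safe for concurrent appends
  val collectedInts = new ConcurrentLinkedQueue[Int]()

  def runFunction(): Int = {
    collectedInts.clear()
    val inputListOfInts = 1 to 1000
    val fut = Future.traverse(inputListOfInts) { i =>
      Future {
        collectedInts.add(i)
      }
    }
    Await.ready(fut, Duration.Inf)
    collectedInts.size // 1000 on every run
  }
}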
I wrote this method:
import scala.concurrent._
import ExecutionContext.Implicits.global
import scala.util.{ Success, Failure }

object FuturesSequence extends App {
  val f1 = Future {
    1
  }
  val f2 = Future {
    2
  }
  val lf = List(f1, f2)
  val seq = Future.sequence(lf)
  seq.onSuccess {
    case l => println(l)
  }
}
I was expecting Future.sequence to gather a List[Future] into a Future[List] and then wait for every future (f1 and f2 in my case) to complete before calling onSuccess on the Future[List] (seq in my case).
But after many runs of this code, it only prints "List(1, 2)" once in a while, and I can't figure out why it does not work as expected.
Try this:
import scala.concurrent._
import java.util.concurrent.Executors
import scala.util.{ Success, Failure }

object FuturesSequence extends App {
  implicit val exec = ExecutionContext.fromExecutor(Executors.newCachedThreadPool)

  val f1 = Future {
    1
  }
  val f2 = Future {
    2
  }
  val lf = List(f1, f2)
  val seq = Future.sequence(lf)
  seq.onSuccess {
    case l => println(l)
  }
}
This will always print List(1, 2). The reason is simple: the exec above is an ExecutionContext backed by non-daemon threads, whereas in your example the ExecutionContext was the default one implicitly taken from ExecutionContext.Implicits.global, which uses daemon threads.
Because those threads are daemon threads, the process doesn't wait for the seq future to complete and terminates. If seq does happen to complete in time, it prints, but that doesn't always happen.
The application is exiting before the future completes.
You need to block until the future has completed. This can be achieved in a variety of ways, including changing the ExecutionContext, instantiating a new ThreadPool, Thread.sleep, etc., or by using methods on scala.concurrent.Await.
The simplest way for your code is to use Await.ready, which blocks on a future for a specified amount of time. In the modified code below, the application waits up to 5 seconds for the future to complete before exiting.
Note also the extra import of scala.concurrent.duration._ so we can specify the time to wait.
import scala.concurrent._
import scala.concurrent.duration._
import java.util.concurrent.Executors
import scala.util.{ Success, Failure }

object FuturesSequence extends App {
  val f1 = Future {
    1
  }
  val f2 = Future {
    2
  }
  val lf = List(f1, f2)
  val seq = Future.sequence(lf)
  seq.onSuccess {
    case l => println(l)
  }
  Await.ready(seq, 5 seconds)
}
By using Await.result instead, you can skip the onSuccess method too, as it will return the resulting list to you.
Example:
val seq: List[Int] = Await.result(Future.sequence(lf), 5 seconds)
println(seq)
I am reading the Akka Scala documentation, and there is an example (p. 171, bottom):
// imports added for compilation
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

class Some {
}

object Some {
  def main(args: Array[String]) {
    // Create a sequence of Futures
    val futures = for (i <- 1 to 1000) yield Future(i * 2)
    val futureSum = Future.fold(futures)(0)(_ + _)
    futureSum foreach println
  }
}
I ran it, but nothing happened; there was no console output. What is wrong?
You don't wait for the future to complete, so you create a race between the program exiting and the futures completing and the side effect running. On your machine the future seems to lose the race; on the machines of the commenters who say "it works", the future wins the race.
You can use Await to block on a future and wait for it to complete. This is something you should only be doing "at the ends of the world"; you should very rarely actually be using Await...
// imports added for compilation
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global
import scala.concurrent.duration._ // for the "1 second" syntax
import scala.concurrent.Await

class Some {
}

object Some {
  def main(args: Array[String]) {
    // Create a sequence of Futures
    val futures = for (i <- 1 to 1000) yield Future(i * 2)
    val futureSum = Future.fold(futures)(0)(_ + _)
    // we map instead of foreach, to make sure that the side-effect is part of the future
    // and we "await" for the future to complete (for 1 second)
    Await.result(futureSum map println, 1 second)
  }
}
As others have stated, the issue is the race condition where the futures are competing with the program terminating. The JVM has a concept of daemon threads: it waits for non-daemon threads to terminate, but not for daemon threads. So if you want to wait for threads to complete, use non-daemon threads.
The way threads are created for Scala futures is through an implicit scala.concurrent.ExecutionContext. The one you use (import ExecutionContext.Implicits.global) starts daemon threads. However, it is possible to use non-daemon threads, and if you use an ExecutionContext with non-daemon threads, the JVM will wait, which in your case is reasonable behaviour. Naively:
import scala.concurrent.Future
import scala.concurrent.ExecutionContextExecutor
import scala.concurrent.ExecutionContext

class MyExecutionContext extends ExecutionContext {
  override def execute(runnable: Runnable) = {
    val t = new Thread(runnable)
    t.setDaemon(false)
    t.start()
  }
  override def reportFailure(t: Throwable) = t.printStackTrace
}

object Some {
  implicit lazy val context: ExecutionContext = new MyExecutionContext

  def main(args: Array[String]) {
    // Create a sequence of Futures
    val futures = for (i <- 1 to 1000) yield Future(i * 2)
    val futureSum = Future.fold(futures)(0)(_ + _)
    futureSum foreach println
  }
}
Be careful with using the above ExecutionContext in production, because it doesn't use a thread pool and can create an unbounded number of threads. But the message is: you can control everything about the threads behind Futures through an ExecutionContext. Explore the various Scala and Akka contexts to find what you need, or, if nothing suits, write your own.
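For example, here is a sketch of a bounded, non-daemon alternative built on a standard fixed-size pool (the pool size and thread name are arbitrary choices of mine):
import java.util.concurrent.{Executors, ThreadFactory}
import scala.concurrent.ExecutionContext

// four non-daemon worker threads; the JVM stays alive until shutdown() is called,
// so remember to shut the pool down once your futures are done
val pool = Executors.newFixedThreadPool(4, new ThreadFactory {
  override def newThread(r: Runnable): Thread = {
    val t = new Thread(r, "future-worker")
    t.setDaemon(false)
    t
  }
})
implicit val context: ExecutionContext = ExecutionContext.fromExecutor(pool)

// ... run your futures here ...
// pool.shutdown()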
Either of the following statements at the end of the main function would address your need. As the above answers said, allow the future to complete: the main thread is different from the future's thread, and when main completes, the JVM can terminate before the future's thread finishes.
Thread.sleep(500) // ... simple solution
Await.result(futureSum, Duration(500, MILLISECONDS)) // ... you have to import scala.concurrent.duration._ to use the Duration object.