How to manage synchronization between two Akka Actors? - scala

I have two functions to execute: first calculate(), then calculateSomethingElse(). My problem is that I need to call the first one and wait for it to finish before calling the second.
My first idea was to use Await.result, like this:
implicit val timeout = Timeout(10.seconds)

val future1 = ask(specificActor, Something(id))
Await.result(future1, timeout.duration)

val future2 = ask(specificActor, SomethingElse(id))
Await.result(future2, timeout.duration)
But I don't really need a timeout here; I just want to wait for the first call to finish.
class HandlerActor extends Actor {
  override def receive: Receive = {
    case m1 => // `id` comes from the incoming message in the real code
      specificActor ! Something(id)
      specificActor ! SomethingElse(id)
  }
}

class SpecificActor extends Actor {
  override def receive: Receive = {
    case Something(id) =>
      myServ.calculate() // returns Future[List[EitherErr[Message]]]
    case SomethingElse(id) =>
      myServ.calculateSomethingElse() // returns Future[EitherErr[Unit]]
  }
}

You need to call it like:
for {
  r1 <- ask(specificActor, Something(id))
  r2 <- ask(specificActor, SomethingElse(id))
} yield (r1, r2)
or a second option:
val result = ask(specificActor, Something(id)).flatMap { r1 =>
  ask(specificActor, SomethingElse(id)).map(r2 => (r1, r2))
}
In both cases the calls run one after the other, and you get both results if you need them; but if the first call fails, the second will not be made.
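Note that, as written, SpecificActor never replies to its sender, so the ask above would only complete via timeout even after calculate() finishes. A minimal sketch of the actor side (assuming myServ is in scope) pipes each Future's result back to the asker:
import akka.pattern.pipe

class SpecificActor extends Actor {
  import context.dispatcher
  override def receive: Receive = {
    case Something(id) =>
      // reply with the computation's result once the Future completes
      myServ.calculate() pipeTo sender()
    case SomethingElse(id) =>
      myServ.calculateSomethingElse() pipeTo sender()
  }
}
With that in place, the for comprehension really does run SomethingElse only after calculate() has completed.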

Related

FS2 Stream with StateT[IO, _, _], periodically dumping state

I have a program which consumes an infinite stream of data. Along the way I'd like to record some metrics, which form a monoid since they're just simple sums and averages. Periodically, I want to write out these metrics somewhere, clear them, and return to accumulating them. I have essentially:
object Foo {
  type MetricsIO[A] = StateT[IO, MetricData, A]

  def recordMetric(m: MetricData): MetricsIO[Unit] = {
    StateT.modify(_.combine(m))
  }

  def sendMetrics: MetricsIO[Unit] = {
    StateT.modifyF { s =>
      val write: IO[Unit] = writeMetrics(s)
      write.attempt.map {
        case Left(_)  => s
        case Right(_) => Monoid[MetricData].empty
      }
    }
  }
}
So most of the execution uses IO directly and lifts using StateT.liftF. And in certain situations, I include some calls to recordMetric. At the end of it I've got a stream:
val mainStream: Stream[MetricsIO, Bar] = ...
And I want to periodically, say every minute or so, dump the metrics, so I tried:
val scheduler: Scheduler = ...

val sendStream =
  scheduler
    .awakeEvery[MetricsIO](FiniteDuration(1, TimeUnit.MINUTES))
    .evalMap(_ => Foo.sendMetrics)

val result = mainStream.concurrently(sendStream).compile.drain
And then I do the usual top level program stuff of calling run with the start state and then calling unsafeRunSync.
The issue is, I only ever see empty metrics! I suspect it has something to do with my monoid implicitly providing empty metrics to sendStream, but I can't quite figure out why that should be or how to fix it. Maybe there's a way I can "interleave" these sendMetrics calls into the main stream instead?
Edit: here's a minimal complete runnable example:
import fs2._
import cats.implicits._
import cats.data._
import cats.effect._
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

val sec = Executors.newScheduledThreadPool(4)
implicit val ec = ExecutionContext.fromExecutorService(sec)

type F[A] = StateT[IO, List[String], A]

val slowInts = Stream.unfoldEval[F, Int, Int](1) { n =>
  StateT(state => IO {
    Thread.sleep(500)
    val message = s"hello $n"
    val newState = message :: state
    val result = Some((n, n + 1))
    (newState, result)
  })
}

val ticks = Scheduler.fromScheduledExecutorService(sec).fixedDelay[F](FiniteDuration(1, SECONDS))

val slowIntsPeriodicallyClearedState = slowInts.either(ticks).evalMap[Int] {
  case Left(n) => StateT.liftF(IO(n))
  case Right(_) => StateT(state => IO {
    println(state)
    (List.empty, -1)
  })
}
Now if I do:
slowInts.take(10).compile.drain.run(List.empty).unsafeRunSync
Then I get the expected result - the state properly accumulates into the output. But if I do:
slowIntsPeriodicallyClearedState.take(10).compile.drain.run(List.empty).unsafeRunSync
Then I see an empty list consistently printed out. I would have expected partial lists (approx. 2 elements) printed out.
StateT is not safe to use with effect types, because it's not safe in the face of concurrent access. Instead, consider using a Ref (from either fs2 or cats-effect, depending on which version you're on).
Something like this:
def slowInts(ref: Ref[IO, List[String]]) = Stream.unfoldEval[IO, Int, Int](1) { n =>
  val message = s"hello $n"
  ref.modify(message :: _) *> IO {
    Thread.sleep(500)
    Some((n, n + 1))
  }
}

val ticks = Scheduler.fromScheduledExecutorService(sec).fixedDelay[IO](FiniteDuration(1, SECONDS))

def slowIntsPeriodicallyClearedState(ref: Ref[IO, List[String]]) =
  slowInts(ref).either(ticks).evalMap[Int] {
    case Left(n) => IO.pure(n)
    case Right(_) =>
      ref.modify(_ => Nil).flatMap { case Change(previous, _) =>
        IO(println(previous)).as(-1)
      }
  }
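To wire it up, something like this could work (a sketch, assuming fs2 0.10-era APIs where fs2.async.refOf creates the Ref and the implicit ExecutionContext from above is in scope):
val program: IO[Unit] = for {
  ref <- fs2.async.refOf[IO, List[String]](List.empty) // shared, concurrency-safe state
  _   <- slowIntsPeriodicallyClearedState(ref).take(10).compile.drain
} yield ()

program.unsafeRunSync()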

Processing an unknown number of actor results on a single timeout

I am looking to expand the following code to work for an unknown number of actor ask requests.
implicit val timeout = Timeout(100.millis)

val sendRequestActor = context.actorOf(Props(new SendRequest(request)), "Send_Request_".concat(getActorNumber))
val sendRequestActor2 = context.actorOf(Props(new SendRequest(request)), "Send_Request_".concat(getActorNumber))

val a1 = ask(sendRequestActor, Request).fallbackTo(Future.successful(RequestTimeout))
val a2 = ask(sendRequestActor2, Request).fallbackTo(Future.successful(RequestTimeout))

val result = for {
  r1 <- a1
  r2 <- a2
} yield (r1, r2)

val r = Await.result(result, 100.millis)

r match {
  case (b: SuccessResponse, b2: SuccessResponse) =>
    //Process Results
  case (b: SuccessResponse, b2: RequestTimeout) =>
    //Process Results
  case (b: RequestTimeout, b2: SuccessResponse) =>
    //Process Results
  case (b: RequestTimeout, b2: RequestTimeout) =>
    //Process Results
  case _ =>
}
I am trying to send out requests to a List of recipients (obtained from a previous database call). The number of recipients will vary each time this function is called. Recipients have a maximum of 100 milliseconds to respond before I want to time out their requests and record a RequestTimeout. The SendRequest actor will reply with SuccessResponse if the recipient responds. I am assuming I will have to change the val result for comprehension to process a list, but I am unsure how to structure everything so that I wait the minimum amount of time (either until all actors return or until the timeout hits, whichever comes first). I do not need everything in a single return value as in the example; I am fine with a list of results and matching on the type of each element.
Any help would be appreciated; please let me know if I can provide any other information.
Thank you
Edit:
Calling Class:
case object GetResponses

def main(args: Array[String]) {
  val route = {
    get {
      complete {
        //stuff
        val req_list = List(req1, req2, req3)
        val createRequestActor = system.actorOf(Props(new SendAll(req_list)), "Get_Response_Actor_" + getActorNumber)
        val request_future = ask(createRequestActor, GetResponses).mapTo[List[Any]]
        Thread.sleep(1000)
        println(request_future)
        //more stuff
      }
    }
  }
  Http().bindAndHandle(route, "localhost", 8080)
}
Updated Sending Class:
class SendAll(requests: List[Request]) extends Actor {
  import context.{become, dispatcher}

  var numProcessed = 0
  var results: List[Any] = List()

  requests.foreach(self ! _)

  implicit val timeout = Timeout(100.millis)

  def receive = {
    case r: RequestMsg =>
      val sendRequestActor = context.actorOf(Props(new SendRequest(r)), "Send_Request_".concat(getActorNumber))
      (sendRequestActor ? Request).pipeTo(self)
    case s: SuccessResponse =>
      println("Got Success")
      results = results :+ s
      println(results.size + " == " + requests.size)
      if (results.size == requests.size) {
        println("Before done")
        become(done)
      }
    case akka.actor.Status.Failure(f) =>
      println("Got Failed")
      results = results :+ RequestTimeout
      if (results.size == requests.size) {
        become(done)
      }
    case m =>
      println("Got Other")
  }

  def done: Receive = {
    case GetResponses =>
      println("Done")
      sender ! results
    case _ =>
      println("Done as well")
  }
}
Output
Got Success
1 == 3
Got Success
2 == 3
Got Success
3 == 3
Before done
Future(<not completed>)
I would pass the list of requests to the actor, then pipe the responses from the child actors to self instead of using Await.result. For example:
class Handler(requests: List[RequestMsg]) extends Actor {
  import context.{become, dispatcher}

  var numProcessed = 0
  var results: List[Any] = List()

  requests.foreach(self ! _)

  implicit val timeout = Timeout(100.millis)

  def receive = {
    case r: RequestMsg =>
      val sendRequestActor = context.actorOf(Props(new SendRequest(r)), "Send_Request".concat(getActorNumber))
      (sendRequestActor ? Request).pipeTo(self)
    case s: SuccessResponse =>
      println(s"response: $s")
      results = results :+ s
      if (results.size == requests.size)
        become(done)
    case akka.actor.Status.Failure(f) =>
      println("a request failed or timed out")
      results = results :+ RequestTimeout
      if (results.size == requests.size)
        become(done)
    case m =>
      println(s"Unhandled message received while processing requests: $m")
      sender ! NotDone
  }

  def done: Receive = {
    case GetResponses =>
      println("sending responses")
      sender ! results
  }
}
You would instantiate an actor for every list of requests:
val requests1 = List(RequestMsg("one"), RequestMsg("two"), RequestMsg("three"))
val handler1 = system.actorOf(Props(new Handler(requests1)))
In this example, following the principle that an actor should have a distinct, limited sphere of responsibility, the actor simply coordinates requests and responses; it doesn't perform any processing on the collected responses. The idea is that another actor would send this actor a GetResponses message in order to get the responses and process them (or this actor would proactively send the results to a processing actor).
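For example, a caller could collect the results with another ask (a sketch; GetResponses is only answered once the handler has switched to its done state, so in practice you would retry, or have the handler push results to a processing actor as described above):
implicit val timeout = Timeout(200.millis)

val responses: Future[List[Any]] = (handler1 ? GetResponses).mapTo[List[Any]]
responses.foreach(_.foreach(println))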
The simplest solution is to put all your actor refs into a List, map it to a List[Future], and use Future.sequence to obtain a Future[List].
val route = {
  get {
    val listActorRefs = List(actorRef1, actorRef2, ...)
    val futureListResponses = Future.sequence(listActorRefs.map(_ ? Request))
    onComplete(futureListResponses) {
      case Success(listResponse) => ...
        complete(...)
      case Failure(exception) => ...
    }
  }
}
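One caveat: with Future.sequence, a single failed or timed-out ask fails the whole Future[List]. If you want a RequestTimeout recorded per recipient instead, as in the original code, you can recover each ask first (a sketch):
import akka.pattern.AskTimeoutException

val futureListResponses = Future.sequence(
  listActorRefs.map { ref =>
    (ref ? Request).recover { case _: AskTimeoutException => RequestTimeout }
  }
)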
A better solution is to avoid lots of actor asks: prepare a ResponseCollector actor which sends all your messages (I suggest looking at BroadcastPool) and schedules one message to itself to stop waiting and return the result.
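A minimal sketch of such a collector; the Deadline message, the 100 millisecond budget, and the padding with RequestTimeout are illustrative assumptions, not a standard API:
case object Deadline // hypothetical internal message marking the single timeout

class ResponseCollector(recipients: Seq[ActorRef], replyTo: ActorRef) extends Actor {
  import context.dispatcher
  import scala.concurrent.duration._

  private var results: List[Any] = Nil

  recipients.foreach(_ ! Request)
  context.system.scheduler.scheduleOnce(100.millis, self, Deadline) // one fixed deadline

  def receive = {
    case s: SuccessResponse =>
      results = s :: results
      if (results.size == recipients.size) finish() // everyone replied: finish early
    case Deadline =>
      finish() // deadline hit: return whatever arrived in time
  }

  private def finish(): Unit = {
    // pad missing replies so the caller sees one entry per recipient
    val missing = List.fill(recipients.size - results.size)(RequestTimeout)
    replyTo ! (results ++ missing)
    context.stop(self)
  }
}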

Iterate a data source asynchronously in batches and stop when the remote returns no data in Scala

Let's say we have a fake data source which returns the data it holds in batches:
import scala.concurrent.Future
import scala.util.{Random, Success}

class DataSource(size: Int) {
  private var s = 0
  implicit val g = scala.concurrent.ExecutionContext.global

  def getData(): Future[List[Int]] = {
    s = s + 1
    Future {
      Thread.sleep(Random.nextInt(s * 100))
      if (s <= size) {
        List.fill(100)(s)
      } else {
        List()
      }
    }
  }
}
object Test extends App {
  val source = new DataSource(100)
  implicit val g = scala.concurrent.ExecutionContext.global

  def process(v: List[Int]): Unit = {
    println(v)
  }

  def next(f: (List[Int]) => Unit): Unit = {
    val fut = source.getData()
    fut.onComplete {
      case Success(v) =>
        f(v)
        v match {
          case h :: t => next(f) // keep going while batches are non-empty
          case _ => // empty batch: stop, but nothing signals overall completion
        }
    }
  }

  next(process)
  Thread.sleep(1000000000)
}
That is my version, but the problem is that part of it is not pure. Ideally, I would like to wrap the Future for each batch into one big future, with the wrapper future succeeding when the last batch returns a 0-size list. My situation is a little different from this post: the next() there is a synchronous call, while mine is also async.
Or is it even possible to do what I want? The next batch should only be fetched when the previous one has resolved, and whether to fetch the next batch depends on the size returned.
What's the best way to work with this type of data source? Are there any existing Scala frameworks that provide the feature I am looking for? Are Play's Iteratee, Enumerator, and Enumeratee the right tools? If so, can anyone provide an example of how to use those facilities to implement what I am looking for?
Edit:
With help from chunjef, I tried it out, and it actually worked for me. However, I made a small change based on his answer:
Source.fromIterator(() => Iterator.continually(source.getData()))
  // note: Future#filter fails the Future on an empty batch (NoSuchElementException),
  // so the stream ends with an error rather than completing cleanly
  .mapAsync(1)(f => f.filter(_.size > 0))
  .via(Flow[List[Int]].takeWhile(_.nonEmpty))
  .runForeach(println)
However, can someone give a comparison between Akka Streams and Play's Iteratee? Is it worth also trying out Iteratee?
Code snip 1:
Source.fromIterator(() => Iterator.continually(ds.getData)) // line 1
.mapAsync(1)(identity) // line 2
.takeWhile(_.nonEmpty) // line 3
.runForeach(println) // line 4
Code snip 2: Assuming getData depends on some output of another flow, I would like to concat it with the flow below. However, it yields a "too many open files" error. I'm not sure what causes this error; mapAsync has been limited to a throughput of 1, if I understood correctly.
Flow[Int].mapConcat[Future[List[Int]]](c => {
Iterator.continually(ds.getData(c)).to[collection.immutable.Iterable]
}).mapAsync(1)(identity).takeWhile(_.nonEmpty).runForeach(println)
The following is one way to achieve the same behavior with Akka Streams, using your DataSource class:
import scala.concurrent.Future
import scala.util.Random

import akka.actor.ActorSystem
import akka.stream._
import akka.stream.scaladsl._

object StreamsExample extends App {
  implicit val system = ActorSystem("Sandbox")
  implicit val materializer = ActorMaterializer()

  val ds = new DataSource(100)

  Source.fromIterator(() => Iterator.continually(ds.getData)) // line 1
    .mapAsync(1)(identity)                                    // line 2
    .takeWhile(_.nonEmpty)                                    // line 3
    .runForeach(println)                                      // line 4
}

class DataSource(size: Int) {
  ...
}
A simplified line-by-line overview:
line 1: Creates a stream source that continually calls ds.getData if there is downstream demand.
line 2: mapAsync is a way to deal with stream elements that are Futures. In this case, the stream elements are of type Future[List[Int]]. The argument 1 is the level of parallelism: we specify 1 here because DataSource internally uses a mutable variable, and a parallelism level greater than one could produce unexpected results. identity is shorthand for x => x, which basically means that for each Future, we pass its result downstream without transforming it.
line 3: Essentially, ds.getData is called as long as the result of the Future is a non-empty List[Int]. If an empty List is encountered, processing is terminated.
line 4: runForeach here takes a function List[Int] => Unit and invokes that function for each stream element.
Ideally, I would like to wrap the Future for each batch into a big future, and the wrapper future success when last batch returned 0 size list?
I think you are looking for a Promise.
You would set up a Promise before you start the first iteration.
This gives you promise.future, a Future that you can then use to follow the completion of everything.
In your onComplete, you add a case _ => promise.success(()).
Something like
def loopUntilDone(f: (List[Int]) => Unit): Future[Unit] = {
  val promise = Promise[Unit]

  def next(): Unit = source.getData().onComplete {
    case Success(v) =>
      f(v)
      v match {
        case h :: t => next()
        case _ => promise.success(())
      }
    case Failure(e) => promise.failure(e)
  }

  // get going
  next()

  // return the Future for everything
  promise.future
}
// future for everything, this is a `Future[Unit]`
// its `onComplete` will be triggered when there is no more data
val everything = loopUntilDone(process)
You are probably looking for a reactive streams library. My personal favorite (and one I'm most familiar with) is Monix. This is how it will work with DataSource unchanged
import scala.concurrent.duration.Duration
import scala.concurrent.Await
import monix.reactive.Observable
import monix.execution.Scheduler.Implicits.global

object Test extends App {
  val source = new DataSource(100)

  val completed = // <- this is Future[Unit], completes when foreach is done
    Observable.repeat(())
      // fromFuture is built inside flatMap so getData is re-evaluated for every
      // element; flatMap is sequential (concat semantics), so batches stay ordered
      .flatMap(_ => Observable.fromFuture(source.getData()))
      .takeWhile(_.nonEmpty) // <- Observable[List[Int]] has collection-like methods
      .foreach(println)

  Await.result(completed, Duration.Inf)
}
I just figured out that using flatMapConcat achieves what I wanted. There is no point starting another question since I already have the answer, so I'm putting my sample code here in case someone is looking for something similar.
This type of API is very common for integration between traditional enterprise applications. The DataSource mocks the API, while object Test demonstrates how client code can use Akka Streams to consume it.
In my small project the API was provided over SOAP, and I used scalaxb to transform the SOAP calls into Scala async style. With the client calls demonstrated in object Test, we can consume the API with Akka Streams. Thanks all for the help.
class DataSource(size: Int) {
  private var transactionId: Long = 0
  private val transactionCursorMap: mutable.HashMap[TransactionId, Set[ReadCursorId]] = mutable.HashMap.empty
  private val cursorIteratorMap: mutable.HashMap[ReadCursorId, Iterator[List[Int]]] = mutable.HashMap.empty
  implicit val g = scala.concurrent.ExecutionContext.global

  case class TransactionId(id: Long)
  case class ReadCursorId(id: Long)

  def startTransaction(): Future[TransactionId] = {
    Future {
      synchronized {
        transactionId += 1
      }
      val t = TransactionId(transactionId)
      transactionCursorMap.update(t, Set(ReadCursorId(0)))
      t
    }
  }

  def createCursorId(t: TransactionId): ReadCursorId = {
    synchronized {
      val c = transactionCursorMap.getOrElseUpdate(t, Set(ReadCursorId(0)))
      val currentId = c.foldLeft(0L) { (acc, a) => acc.max(a.id) }
      val cId = ReadCursorId(currentId + 1)
      transactionCursorMap.update(t, c + cId)
      cursorIteratorMap.put(cId, createIterator)
      cId
    }
  }

  def createIterator(): Iterator[List[Int]] = {
    (for { i <- 1 to 100 } yield List.fill(100)(i)).toIterator
  }

  def startRead(t: TransactionId): Future[ReadCursorId] = {
    Future {
      createCursorId(t)
    }
  }

  def getData(cursorId: ReadCursorId): Future[List[Int]] = {
    synchronized {
      Future {
        Thread.sleep(Random.nextInt(100))
        cursorIteratorMap.get(cursorId) match {
          case Some(i) if i.hasNext => i.next()
          case _ => List() // exhausted or unknown cursor: no more data
        }
      }
    }
  }
}
object Test extends App {
  val source = new DataSource(10)
  implicit val system = ActorSystem("Sandbox")
  implicit val materializer = ActorMaterializer()
  implicit val g = scala.concurrent.ExecutionContext.global
  val s = Source.fromFuture(source.startTransaction())
    .map { e =>
      source.startRead(e)
    }
    .mapAsync(1)(identity)
    .flatMapConcat { e =>
      Source.fromIterator(() => Iterator.continually(source.getData(e)))
    }
    .mapAsync(5)(identity)
    .via(Flow[List[Int]].takeWhile(_.nonEmpty))
    .runForeach(println)
}

How to simplify future result handling in Akka/Futures?

I want to make my for comprehension code as simple as possible.
Here is the code:
case object Message

class SimpleActor extends Actor {
  def receive = {
    case Message => sender ! Future { "Hello" }
  }
}

object SimpleActor extends App {
  val test = ActorSystem("Test")
  val sa = test.actorOf(Props[SimpleActor])

  implicit val timeout = Timeout(2.seconds)

  val fRes = for {
    f <- (sa ? Message).asInstanceOf[Future[Future[String]]]
    r <- f
  } yield r

  println {
    Await.result(fRes, 5.seconds)
  }
}
Is it possible to get rid of this part
.asInstanceOf[Future[Future[String]]]
?
Look at the pipeTo function, which is all about passing a Future between actors; see the Akka docs on the ask pattern.
myFuture pipeTo sender
With pipeTo, the actor replies with the Future's result rather than the Future itself, so the ask returns that result directly (as a Future[Any]); as your comment below asks, you will still need mapTo[String] to get the result typed as String. Thus your for comprehension can be ditched and the ask called directly:
val fRes = (sa ? Message).mapTo[String]
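For completeness, the actor side might then look like this (a sketch of the pipeTo wiring):
import akka.pattern.pipe
import scala.concurrent.Future

class SimpleActor extends Actor {
  import context.dispatcher

  def receive = {
    case Message =>
      // reply with the Future's value once it completes,
      // instead of sending the Future object itself
      Future { "Hello" } pipeTo sender()
  }
}
Sending Future { "Hello" } directly hands the asker the Future object itself, which is why the double Future shows up in the first place.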

Resolving Akka futures from ask in the event of a failure

I am calling an Actor using the ask pattern within a Spray application, and returning the result as the HTTP response. I map failures from the actor to a custom error code.
val authActor = context.actorOf(Props[AuthenticationActor])

callService((authActor ? TokenAuthenticationRequest(token)).mapTo[LoggedInUser]) { user =>
  complete(StatusCodes.OK, user)
}
def callService[T](f: => Future[T])(cb: T => RequestContext => Unit) = {
  onComplete(f) {
    case Success(value: T) => cb(value)
    case Failure(ex: ServiceException) => complete(ex.statusCode, ex.errorMessage)
    case e => complete(StatusCodes.InternalServerError, "Unable to complete the request. Please try again later.")
    //In reality this returns a custom error object.
  }
}
This works correctly when the authActor sends a failure, but if the authActor throws an exception, nothing happens until the ask timeout completes. For example:
override def receive: Receive = {
  case _ => throw new ServiceException(ErrorCodes.AuthenticationFailed, "No valid session was found for that token")
}
I know that the Akka docs say that
To complete the future with an exception you need to send a Failure message to the sender. This is not done automatically when an actor throws an exception while processing a message.
But given that I use asks for a lot of the interface between the Spray routing actors and the service actors, I would rather not wrap the receive part of every child actor with a try/catch. Is there a better way to achieve automatic handling of exceptions in child actors, and immediately resolve the future in the event of an exception?
Edit: this is my current solution. However, it's quite messy to do this for every child actor.
override def receive: Receive = {
  case default =>
    try {
      default match {
        case _ => throw new ServiceException("") //Actual code would go here
      }
    }
    catch {
      case se: ServiceException =>
        logger.error("Service error raised:", se)
        sender ! Status.Failure(se)
      case ex: Exception =>
        sender ! Status.Failure(ex)
        throw ex
    }
}
That way if it's an expected error (i.e. ServiceException), it's handled by creating a failure. If it's unexpected, it returns a failure immediately so the future is resolved, but then throws the exception so it can still be handled by the SupervisorStrategy.
If you want a way to provide automatic sending of a response back to the sender in case of an unexpected exception, then something like this could work for you:
trait FailurePropatingActor extends Actor {
  override def preRestart(reason: Throwable, message: Option[Any]) {
    super.preRestart(reason, message)
    sender() ! Status.Failure(reason)
  }
}
We override preRestart and propagate the failure back to the sender as a Status.Failure which will cause an upstream Future to be failed. Also, it's important to call super.preRestart here as that's where child stopping happens. Using this in an actor looks something like this:
case class GetElement(list: List[Int], index: Int)

class MySimpleActor extends FailurePropatingActor {
  def receive = {
    case GetElement(list, i) =>
      val result = list(i)
      sender() ! result
  }
}
If I was to call an instance of this actor like so:
import akka.pattern.ask
import concurrent.duration._

val system = ActorSystem("test")
import system.dispatcher
implicit val timeout = Timeout(2.seconds)

val ref = system.actorOf(Props[MySimpleActor])
val fut = ref ? GetElement(List(1, 2, 3), 6)

fut onComplete {
  case util.Success(result) =>
    println(s"success: $result")
  case util.Failure(ex) =>
    println(s"FAIL: ${ex.getMessage}")
    ex.printStackTrace()
}
Then it would properly hit my Failure block. Now, the code in that base trait works well when Futures are not involved in the actor extending the trait, as in the simple actor here. But if you use Futures then you need to be careful: exceptions that happen inside the Future don't cause restarts in the actor, and also, in preRestart, the call to sender() will not return the correct ref because the actor has already moved on to the next message. An actor like this shows the issue:
class MyBadFutureUsingActor extends FailurePropatingActor {
  import context.dispatcher

  def receive = {
    case GetElement(list, i) =>
      val orig = sender()
      val fut = Future {
        val result = list(i)
        orig ! result
      }
  }
}
If we were to use this actor in the previous test code, we would always get a timeout in the failure situation. To mitigate that, you need to pipe the results of futures back to the sender like so:
class MyGoodFutureUsingActor extends FailurePropatingActor {
  import context.dispatcher
  import akka.pattern.pipe

  def receive = {
    case GetElement(list, i) =>
      val fut = Future {
        list(i)
      }
      fut pipeTo sender()
  }
}
In this particular case, the actor itself is not restarted because it did not encounter an uncaught exception. Now, if your actor needed to do some additional processing after the future, you can pipe back to self and explicitly fail when you get a Status.Failure:
class MyGoodFutureUsingActor extends FailurePropatingActor {
  import context.dispatcher
  import akka.pattern.pipe

  def receive = {
    case GetElement(list, i) =>
      val fut = Future {
        list(i)
      }
      fut.to(self, sender())
    case i: Int => // the piped result; list elements are Ints
      sender() ! i * 2
    case Status.Failure(ex) =>
      throw ex
  }
}
If that behavior becomes common, you can make it available to whatever actors need it like so:
trait StatusFailureHandling { me: Actor =>
  def failureHandling: Receive = {
    case Status.Failure(ex) =>
      throw ex
  }
}
class MyGoodFutureUsingActor extends FailurePropatingActor with StatusFailureHandling {
  import context.dispatcher
  import akka.pattern.pipe

  def receive = myReceive orElse failureHandling

  def myReceive: Receive = {
    case GetElement(list, i) =>
      val fut = Future {
        list(i)
      }
      fut.to(self, sender())
    case i: Int => // the piped result; list elements are Ints
      sender() ! i * 2
  }
}