Say I define a real-time thermometer actor thus:
import akka.actor.typed.{ActorRef, Behavior}
import akka.actor.typed.scaladsl.{ActorContext, Behaviors}

case class Temperature(centigrade: Int)

object Thermometer {
  trait Command
  case class Test(replyTo: ActorRef[Temperature]) extends Command

  def apply(): Behavior[Command] = Behaviors.receiveMessage {
    case Test(replyTo) =>
      replyTo ! Temperature(???) // take a real-time measurement
      Behaviors.same
  }
}
And an actor that reacts to temperature changes as follows:
def reactor(thermometer: ActorRef[Thermometer.Command], floor: Int, ceiling: Int): Behavior[Temperature] = {
  def next(context: ActorContext[Temperature]): Behavior[Temperature] = Behaviors.receiveMessage {
    case Temperature(centigrade) =>
      thermometer ! Thermometer.Test(context.self)
      if (centigrade < floor) low(context)
      else if (centigrade > ceiling) high(context)
      else next(context)
  }

  def low(context: ActorContext[Temperature]): Behavior[Temperature] = Behaviors.receiveMessage {
    case Temperature(_) =>
      // Carry out behavior specific to a previous low temperature
      next(context)
  }

  def high(context: ActorContext[Temperature]): Behavior[Temperature] = Behaviors.receiveMessage {
    case Temperature(_) =>
      // Carry out behavior specific to a previous high temperature
      next(context)
  }

  Behaviors.setup(next(_))
}
I want to add an actor that simulates temperature changes by reading them from a database. I would want to do this in such a way that I do not read the next simulated temperature until I know my reactor has processed the previous one:
object SimulatedThermometer {
  trait Command
  case class Simulate(temp: Temperature) extends Command
  case class Ack(reactor: Behavior[Temperature]) extends Command

  def apply(temps: Seq[Int], reactor: Behavior[Temperature]) = {
    def iterate(temps: List[Int], reactor: Behavior[Temperature]): Behavior[Command] =
      Behaviors.receive {
        case (ctx, Simulate(temp)) =>
          // Our reactor should process temperature here
          val newReactor = reactor(temp) // HOW TO???
          ctx.self ! Ack(newReactor)
          Behaviors.same
        case (ctx, Ack(behavior)) => temps match {
          case h :: t =>
            // Can now proceed to process next temperature
            ctx.self ! Simulate(Temperature(h))
            iterate(t, behavior)
          case _ =>
            Behaviors.stopped
        }
      }
    iterate(temps.toList, reactor)
  }
}
But is there any way in Akka that I can manually execute this Behavior and transform it to the next Behavior?
Note: I know it would be possible to change my reactor behavior to send a reply when it has processed the message. But this is not necessary for the real-time behavior, and the reactor should not have to be aware of whether it's acting in real time or in simulation.
This sounds like a use case for the BehaviorTestKit, which would look something like:
import akka.actor.testkit.typed.scaladsl.BehaviorTestKit

def apply(temps: Seq[Int], reactor: Behavior[Temperature]): Unit = {
  val testkit = BehaviorTestKit(reactor)
  temps.foreach { temp =>
    testkit.run(Temperature(temp))
    // testkit is guaranteed to have fully processed the message
    // assert effects on the testkit
    // example: you can even save the behavior (mainly to spawn a new testkit) and time travel (modulo side effects)
    val nextBehavior = testkit.currentBehavior
    ()
  }
}
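For example, one way this might be wired up (purely a sketch; the TestInbox standing in for the real thermometer and the sample readings are my assumptions, not part of the question):

import akka.actor.testkit.typed.scaladsl.TestInbox

val thermometerInbox = TestInbox[Thermometer.Command]()
SimulatedThermometer(Seq(10, 25, 40), reactor(thermometerInbox.ref, floor = 15, ceiling = 30))
// thermometerInbox now holds the Test messages the reactor sent after each simulated reading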
Related
I am trying to split a big chunk of text into multiple paragraphs and process it concurrently by calling an external API.
An immutable list is updated each time the response comes from the API for the paragraph.
Once the paragraphs are processed and the list is updated, I would like to ask the Actor for the final status to be used in the next steps.
The problem with the below approach is that I would never know when all the paragraphs are processed.
I need to get back the targetStore once all the paragraphs are processed and the list is final.
import java.util.UUID
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.io.Source
import akka.actor.{Actor, ActorLogging, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout

def main(args: Array[String]) {
  val source = Source.fromFile("input.txt")
  val extDelegator = new ExtractionDelegator()
  source.getLines().foreach(line => extDelegator.processParagraph(line))
  extDelegator.getFinalResult()
}

case class Extract(uuid: UUID, text: String)
case class UpdateList(text: String)
case class DelegateLambda(text: String)
case class FinalResult()

class ExtractionDelegator {
  val system = ActorSystem("ExtractionDelegator")
  val extActor = system.actorOf(Props(classOf[ExtractorDelegateActor]).withDispatcher("fixed-thread-pool"))
  implicit val executionContext = system.dispatchers.lookup("fixed-thread-pool")

  def processParagraph(text: String) = {
    extActor ! Extract(UUID.randomUUID(), text)
  }

  def getFinalResult(): java.util.List[String] = {
    implicit val timeout = Timeout(5 seconds)
    val askActor = system.actorOf(Props(classOf[ExtractorDelegateActor]))
    val future = askActor ? FinalResult()
    val result = Await.result(future, timeout.duration).asInstanceOf[java.util.List[String]]
    result
  }

  def shutdown(): Unit = {
    system.terminate()
  }
}

/* Extractor delegator actor */
class ExtractorDelegateActor extends Actor with ActorLogging {
  var targetStore: scala.collection.immutable.List[String] = scala.collection.immutable.List.empty

  def receive = {
    case Extract(uuid, text) => {
      context.actorOf(Props[ExtractProcessor].withDispatcher("fixed-thread-pool")) ! DelegateLambda(text)
    }
    case UpdateList(res) => {
      targetStore = targetStore :+ res
    }
    case FinalResult() => {
      val senderActor = sender()
      senderActor ! targetStore
    }
  }
}

/* Extract processor (worker) actor */
class ExtractProcessor extends Actor with ActorLogging {
  def receive = {
    case DelegateLambda(text) => {
      val res = callLamdaService(text)
      sender ! UpdateList(res)
    }
  }

  def callLamdaService(text: String): String = {
    // This is where the external API is called.
    Thread.sleep(1000)
    "result" // placeholder for the external call's response
  }
}
Not sure why you want to use actors here; the simplest approach would be
// because you call an external service, you most probably get an async response back
def callLamdaService(text: String): Future[String]
and to process your text you do
implicit val ec = scala.concurrent.ExecutionContext.Implicits.global // use your execution context here

Future.sequence(source.getLines().map(callLamdaService)).map { results =>
  // do what you want with results
}
If you still want to use actors, you can do it by replacing callLamdaService with processParagraph, which internally asks a worker actor that returns the result (so the signature of processParagraph becomes def processParagraph(text: String): Future[String]).
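For example, a sketch of that ask-based processParagraph (this assumes the delegator's extActor is in scope and that the worker replies to Extract with the processed String, which the question's code does not yet do):

import java.util.UUID
import scala.concurrent.Future
import scala.concurrent.duration._
import akka.pattern.ask
import akka.util.Timeout

implicit val timeout: Timeout = Timeout(5.seconds)

def processParagraph(text: String): Future[String] =
  (extActor ? Extract(UUID.randomUUID(), text)).mapTo[String]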
If you still want to start multiple tasks and then ask for the result, you just need to use context.become with a receive(workers: Int), increasing the worker count for each Extract message and decreasing it on each UpdateList message. You will also need to defer the processing of FinalResult while the number of in-flight workers is non-zero.
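A rough sketch of that counting approach, reusing the message types from the question (the class name and parameter names here are my own, not the poster's code):

import akka.actor.{Actor, ActorRef, Props}

class CountingDelegateActor extends Actor {
  def receive = working(inFlight = 0, results = Nil, waiting = None)

  def working(inFlight: Int, results: List[String], waiting: Option[ActorRef]): Receive = {
    case Extract(_, text) =>
      context.actorOf(Props[ExtractProcessor]) ! DelegateLambda(text)
      context.become(working(inFlight + 1, results, waiting))
    case UpdateList(res) =>
      val remaining = inFlight - 1
      val all = results :+ res
      if (remaining == 0) waiting.foreach(_ ! all) // answer a deferred FinalResult
      context.become(working(remaining, all, waiting))
    case FinalResult() =>
      if (inFlight == 0) sender() ! results // everything already processed
      else context.become(working(inFlight, results, Some(sender()))) // defer the reply until done
  }
}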
I am trying to implement a simple one-to-many pub/sub pattern using a BroadcastHub. This fails silently for large numbers of subscribers, which makes me think I am hitting some limit on the number of streams I can run.
First, let's define some events:
sealed trait Event
case object EX extends Event
case object E1 extends Event
case object E2 extends Event
case object E3 extends Event
case object E4 extends Event
case object E5 extends Event
I have implemented the publisher using a BroadcastHub, adding a Sink.actorRefWithAck each time I want to add a new subscriber. Publishing the EX event ends the broadcast:
import akka.NotUsed
import akka.actor.{Actor, ActorLogging, ActorRef}
import akka.event.LoggingReceive
import akka.stream.{ActorMaterializer, OverflowStrategy}
import akka.stream.scaladsl.{BroadcastHub, Keep, Sink, Source, SourceQueueWithComplete}

trait Publisher extends Actor with ActorLogging {
  implicit val materializer = ActorMaterializer()

  private val sourceQueue = Source.queue[Event](Publisher.bufferSize, Publisher.overflowStrategy)

  private val (
    queue: SourceQueueWithComplete[Event],
    source: Source[Event, NotUsed]
  ) = {
    val (q, s) = sourceQueue.toMat(BroadcastHub.sink(bufferSize = 256))(Keep.both).run()
    s.runWith(Sink.ignore)
    (q, s)
  }

  def publish(evt: Event) = {
    log.debug("Publishing Event: {}", evt.getClass().toString())
    queue.offer(evt)
    evt match {
      case EX => queue.complete()
      case _  => ()
    }
  }

  def subscribe(actor: ActorRef, ack: ActorRef): Unit =
    source.runWith(
      Sink.actorRefWithAck(
        actor,
        onInitMessage = Publisher.StreamInit(ack),
        ackMessage = Publisher.StreamAck,
        onCompleteMessage = Publisher.StreamDone,
        onFailureMessage = onErrorMessage))

  def onErrorMessage(ex: Throwable) = Publisher.StreamFail(ex)

  def publisherBehaviour: Receive = {
    case Publisher.Subscribe(sub, ack) => subscribe(sub, ack.getOrElse(sender()))
    case Publisher.StreamAck           => ()
  }

  override def receive = LoggingReceive { publisherBehaviour }
}

object Publisher {
  final val bufferSize = 5
  final val overflowStrategy = OverflowStrategy.backpressure

  case class Subscribe(sub: ActorRef, ack: Option[ActorRef])
  case object StreamAck
  case class StreamInit(ack: ActorRef)
  case object StreamDone
  case class StreamFail(ex: Throwable)
}
Subscribers can implement the Subscriber trait to separate the logic:
trait Subscriber {
  def onInit(publisher: ActorRef): Unit = ()
  def onInit(publisher: ActorRef, k: KillSwitch): Unit = onInit(publisher)
  def onEvent(event: Event): Unit = ()
  def onDone(publisher: ActorRef, subscriber: ActorRef): Unit = ()
  def onFail(e: Throwable, publisher: ActorRef, subscriber: ActorRef): Unit = ()
}
The actor logic is quite simple:
class SubscriberActor(subscriber: Subscriber) extends Actor with ActorLogging {
  def subscriberBehaviour: Receive = {
    case Publisher.StreamInit(ack) => {
      log.debug("Stream initialized.")
      subscriber.onInit(sender())
      sender() ! Publisher.StreamAck
      ack.forward(Publisher.StreamInit(ack))
    }
    case Publisher.StreamDone => {
      log.debug("Stream completed.")
      subscriber.onDone(sender(), self)
    }
    case Publisher.StreamFail(ex) => {
      log.error(ex, "Stream failed!")
      subscriber.onFail(ex, sender(), self)
    }
    case e: Event => {
      log.debug("Observing Event: {}", e)
      subscriber.onEvent(e)
      sender() ! Publisher.StreamAck
    }
  }

  override def receive = LoggingReceive { subscriberBehaviour }
}
One of the key points is that all subscribers must receive all messages sent by the publisher, so we have to know that all streams have materialized and all actors are ready to receive before starting the broadcast. This is why the StreamInit message is forwarded to another, user-provided actor.
To test this, I define a simple MockPublisher that just broadcasts a list of events when told to do so:
class MockPublisher(events: Event*) extends Publisher {
  def receiveBehaviour: Receive = {
    case MockPublish => events map publish
  }
  override def receive = LoggingReceive { receiveBehaviour orElse publisherBehaviour }
}

case object MockPublish
I also define a MockSubscriber who merely counts how many events it has seen:
class MockSubscriber extends Subscriber {
  var count = 0
  val promise = Promise[Int]()
  def future = promise.future

  override def onInit(publisher: ActorRef): Unit = count = 0
  override def onEvent(event: Event): Unit = count += 1
  override def onDone(publisher: ActorRef, subscriber: ActorRef): Unit = promise.success(count)
  override def onFail(e: Throwable, publisher: ActorRef, subscriber: ActorRef): Unit = promise.failure(e)
}
And a small method for subscription:
object MockSubscriber {
  def sub(publisher: ActorRef, ack: ActorRef)(implicit system: ActorSystem): Future[Int] = {
    val s = new MockSubscriber()
    implicit val tOut = Timeout(1.minute)
    val a = system.actorOf(Props(new SubscriberActor(s)))
    val f = publisher ! Publisher.Subscribe(a, Some(ack))
    s.future
  }
}
I put everything together in a unit test:
class SubscriberTests extends TestKit(ActorSystem("SubscriberTests"))
  with WordSpecLike with Matchers with BeforeAndAfterAll with ImplicitSender {

  override def beforeAll: Unit = {
    system.eventStream.setLogLevel(Logging.DebugLevel)
  }

  override def afterAll: Unit = {
    println("Shutting down...")
    TestKit.shutdownActorSystem(system)
  }

  "The Subscriber" must {
    "publish events to many observers" in {
      val n = 9

      val p = system.actorOf(Props(new MockPublisher(E1, E2, E3, E4, E5, EX)))
      val q = scala.collection.mutable.Queue[Future[Int]]()
      for (i <- 1 to n) {
        q += MockSubscriber.sub(p, self)
      }
      for (i <- 1 to n) {
        expectMsgType[Publisher.StreamInit](70.seconds)
      }
      p ! MockPublish
      q.map { f => Await.result(f, 10.seconds) should be (6) }
    }
  }
}
This test succeeds for relatively small values of n, but fails for, say, val n = 90000. No caught or uncaught exception appears anywhere and neither does any out-of-memory complaint from Java (which does occur if I go even higher).
What am I missing?
Edit: Tried this on multiple computers with different specs. Debug info shows no messages reach any of the subscribers once n is high enough.
Akka Streams (and actually any Reactive Streams implementation) gives you backpressure. As long as you haven't defeated it in the way you create your consumers (e.g. by building up a 1 GB JSON that you only chop into smaller pieces after you have fetched the whole thing into memory), you are in the comfortable situation where memory usage is pretty much upper-bounded, because of how backpressure's push-pull mechanics work. Once you measure where that upper bound lies, you can size your JVM and container memory so that it runs without fear of out-of-memory errors (provided nothing else in your JVM can cause a memory spike).
So there is a constraint on how many streams you can run in parallel: you can run only as many as your memory allows. CPU should not be the limitation (you will have multiple threads), but if you start too many streams on one machine the CPU inevitably has to switch between them, making each one slower. That is not necessarily a technical blocker, but you might end up with processing so slow that it no longer fulfils its business purpose (though you would probably have to run far more than a few streams at once for that).
In your tests you might be running into other issues as well. For example, if you run blocking operations on the same thread pool the actor system uses, without telling the pool they are blocking, you can end up with a deadlock (as a matter of fact, you should run all blocking I/O on a different thread pool than "computing" operations). Having 90000(!) concurrent things happening at once, probably all sharing the same small thread pool, almost guarantees running into issues (you would likely hit them even if you ran the code directly on futures instead of actors). On top of that, the actor-system test kit you are using relies, as far as I remember, on blocking calls, which only highlights all the possible problems of a small thread pool that mixes blocking and non-blocking tasks.
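To illustrate that last point, here is a minimal sketch of keeping blocking work off the default dispatcher (the dispatcher name "blocking-io-dispatcher" and its definition in application.conf are assumptions, not part of the question):

import scala.concurrent.{blocking, ExecutionContext, Future}
import akka.actor.ActorSystem

def callBlockingService(text: String)(implicit system: ActorSystem): Future[String] = {
  // "blocking-io-dispatcher" would be a thread-pool dispatcher configured in application.conf
  implicit val blockingEc: ExecutionContext = system.dispatchers.lookup("blocking-io-dispatcher")
  Future {
    blocking {
      Thread.sleep(1000) // stand-in for the blocking call
      s"processed: $text"
    }
  }
}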
I am doing some small research into implementing an Actor without Akka.
I found one implementation of an Actor in Scala (How to implement actor model without Akka?).
It's very simple. Because I don't have enough reputation to add a comment, I created this question.
I wonder, if I use an Actor like the one below:
1. How can I shut down that actor from the main thread?
2. How can I add features similar to Akka's, like a parent actor, kill requests, and the become method?
import scala.concurrent._

trait Actor[T] {
  implicit val context = ExecutionContext.fromExecutor(java.util.concurrent.Executors.newFixedThreadPool(1))
  def receive: T => Unit
  def !(m: T) = Future { receive(m) }
}
This is my own example, trying to adapt the above code snippet:
import scala.concurrent._

/**
 * Created by hminle on 10/21/2016.
 */
trait Message
case class HelloMessage(hello: String) extends Message
case class GoodByeMessage(goodBye: String) extends Message

object State extends Enumeration {
  type State = Value
  val Waiting, Running, Terminating = Value
}

trait Actor[T] {
  implicit val context = ExecutionContext.fromExecutor(java.util.concurrent.Executors.newFixedThreadPool(1))
  private var state: State.State = State.Waiting

  def handleMessage: T => Unit = {
    if (state == State.Waiting) handleMessageWhenWaiting
    else if (state == State.Running) handleMessageWhenRunning
    else handleMessageWhenTerminating
  }

  def !(m: T) = Future { handleMessage(m) }

  def handleMessageWhenWaiting: T => Unit
  def handleMessageWhenRunning: T => Unit
  def handleMessageWhenTerminating: T => Unit

  def transitionTo(destinationState: State.State): Unit = {
    this.state = destinationState
  }
}

class Component1 extends Actor[Message] {
  def handleMessageWhenRunning = {
    case HelloMessage(hello) => {
      println(Thread.currentThread().getName + hello)
    }
    case GoodByeMessage(goodBye) => {
      println(Thread.currentThread().getName + goodBye)
      transitionTo(State.Terminating)
    }
  }

  def handleMessageWhenWaiting = {
    case m => {
      println(Thread.currentThread().getName + " I am waiting, I am not ready to run")
      transitionTo(State.Running)
    }
  }

  def handleMessageWhenTerminating = {
    case m => {
      println(Thread.currentThread().getName + " I am terminating, I cannot handle any message")
      // need to shutdown here
    }
  }
}

class Component2(component1: Actor[Message]) extends Actor[Message] {
  def handleMessageWhenRunning = {
    case HelloMessage(hello) => {
      println(Thread.currentThread().getName + hello)
      component1 ! HelloMessage("hello 1")
    }
    case GoodByeMessage(goodBye) => {
      println(Thread.currentThread().getName + goodBye)
      component1 ! GoodByeMessage("goodbye 1")
      transitionTo(State.Terminating)
    }
  }

  def handleMessageWhenWaiting = {
    case m => {
      println(Thread.currentThread().getName + " I am waiting, I am not ready to run")
      transitionTo(State.Running)
    }
  }

  def handleMessageWhenTerminating = {
    case m => {
      println(Thread.currentThread().getName + " I am terminating, I cannot handle any message")
      // need to shutdown here
    }
  }
}

object ActorExample extends App {
  val a = new Component1
  val b = new Component2(a)
  b ! HelloMessage("hello World 2")
  b ! HelloMessage("hello World 2, 2nd")
  b ! GoodByeMessage("Good bye 2")
  println(Thread.currentThread().getName)
}
You can look at the Actor model implementation in scalaz and take ideas from it; the scalaz actor's source code is easier to gain insight from than Akka's. You have freedom of choice about the architecture: you can use mailboxes based on ConcurrentLinkedQueue as in Akka, use CAS on an AtomicReference as in scalaz, or, as in your case, use the Future mechanism. IMO, you should write a context for your actor system; to solve the first and second items in your question it would be a variant of ActorContext:
val contextStack = new ThreadLocal[List[ActorContext]]
and shutdown can look like this:
1.
case Kill ⇒ throw new ActorKilledException("Kill")
case PoisonPill ⇒ self.stop()
2. For storing the parent actor and similar tasks, you must store a reference to it:
def parent: ActorRef
It's hard to weigh the advantages of each technique (CAS, mailboxes); they are all possible variants for your research.
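For instance, a minimal sketch (my own assumption, not scalaz's or Akka's code) of adding shutdown-from-the-main-thread to the Future-based trait from the question: keep a handle on the underlying ExecutorService and expose a stop():

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

trait StoppableActor[T] {
  private val executor = Executors.newFixedThreadPool(1)
  implicit val context: ExecutionContext = ExecutionContext.fromExecutor(executor)

  def receive: T => Unit
  def !(m: T): Future[Unit] = Future { receive(m) }

  // called from the main thread: stop accepting new work and release the
  // single worker thread once already-submitted messages have run
  def stop(): Unit = executor.shutdown()
}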
Here's the pattern I have come across:
An actor A has multiple children C1, ..., Cn. On receiving a message, A sends it to each of its children, which each do some calculation on the message, and on completion send it back to A. A would then like to combine the results of all the children to pass onto another actor.
What would a solution for this problem look like? Or is this an anti-pattern? In which case how should this problem be approached?
Here is a trivial example which hopefully illustrates my current solution. My concerns are that it duplicates code (up to symmetry), does not extend very well to 'lots' of children, and makes it quite hard to see what's going on.
import akka.actor.{Props, Actor}

case class Tagged[T](value: T, id: Int)

class A extends Actor {
  import C1._
  import C2._

  val c1 = context.actorOf(Props[C1], "C1")
  val c2 = context.actorOf(Props[C2], "C2")

  var uid = 0
  var c1Results = Map[Int, Int]()
  var c2Results = Map[Int, Int]()

  def receive = {
    case n: Int => {
      c1 ! Tagged(n, uid)
      c2 ! Tagged(n, uid)
      uid += 1
    }
    case Tagged(C1Result(n), id) => c2Results get id match {
      case None => c1Results += (id -> n)
      case Some(m) => {
        c2Results -= id
        context.parent ! (n, m)
      }
    }
    case Tagged(C2Result(n), id) => c1Results get id match {
      case None => c2Results += (id -> n)
      case Some(m) => {
        c1Results -= id
        context.parent ! (m, n)
      }
    }
  }
}

class C1 extends Actor {
  import C1._
  def receive = {
    case Tagged(n: Int, id) => sender() ! Tagged(C1Result(n), id)
  }
}

object C1 {
  case class C1Result(n: Int)
}

class C2 extends Actor {
  import C2._
  def receive = {
    case Tagged(n: Int, id) => sender() ! Tagged(C2Result(n), id)
  }
}

object C2 {
  case class C2Result(n: Int)
}
If you think the code looks god-awful, take it easy on me, I've just started learning akka ;)
In the case of many - or a varying number of - child actors, the ask pattern suggested by Zim-Zam will quickly get out of hand.
The aggregator pattern is designed to help with this kind of situation. It provides an Aggregator trait that you can use in an actor to perform your aggregation logic.
A client actor wanting to perform an aggregation can start an Aggregator based actor instance and send it a message that will kick off the aggregation process.
A new aggregator should be created for each aggregation operation and terminate on sending back the result (when it has received all responses or on a timeout).
An example of this pattern to sum integer values held by the actors represented by the Child class is listed below. (Note that there is no need for them to all be children supervised by the same parent actor: the SummationAggregator just needs a collection of ActorRefs.)
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import akka.actor._
import akka.contrib.pattern.Aggregator

object Child {
  def props(value: Int): Props = Props(new Child(value))
  case object GetValue
  case class GetValueResult(value: Int)
}

class Child(value: Int) extends Actor {
  import Child._
  def receive = { case GetValue => sender ! GetValueResult(value) }
}

object SummationAggregator {
  def props = Props(new SummationAggregator)
  case object TimedOut
  case class StartAggregation(targets: Seq[ActorRef])
  case object BadCommand
  case class AggregationResult(sum: Int)
}

class SummationAggregator extends Actor with Aggregator {
  import Child._
  import SummationAggregator._

  expectOnce {
    case StartAggregation(targets) =>
      // Could do what this handler does in line but handing off to a
      // separate class encapsulates the state a little more cleanly
      new Handler(targets, sender())
    case _ =>
      sender ! BadCommand
      context stop self
  }

  class Handler(targets: Seq[ActorRef], originalSender: ActorRef) {
    // Could just store a running total and keep track of the number of responses
    // that we are awaiting...
    var valueResults = Set.empty[GetValueResult]

    context.system.scheduler.scheduleOnce(1.second, self, TimedOut)
    expect {
      case TimedOut =>
        // It might make sense to respond with what we have so far if some responses are still awaited...
        respondIfDone(respondAnyway = true)
    }

    if (targets.isEmpty)
      respondIfDone()
    else
      targets.foreach { t =>
        t ! GetValue
        expectOnce {
          case vr: GetValueResult =>
            valueResults += vr
            respondIfDone()
        }
      }

    def respondIfDone(respondAnyway: Boolean = false) = {
      if (respondAnyway || valueResults.size == targets.size) {
        originalSender ! AggregationResult(valueResults.foldLeft(0) { case (acc, GetValueResult(v)) => acc + v })
        context stop self
      }
    }
  }
}
To use this SummationAggregator from your parent actor you could do:
context.actorOf(SummationAggregator.props) ! StartAggregation(children)
and then handle AggregationResult somewhere in the parent's receive.
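For example (children and nextActor here are assumptions standing in for whatever the parent actually holds):

import SummationAggregator._

def receive = {
  case n: Int =>
    context.actorOf(SummationAggregator.props) ! StartAggregation(children)
  case AggregationResult(sum) =>
    nextActor ! sum // pass the combined result on to another actor
}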
You can use ? instead of ! on the child actors - the ask will give you a Future with each child's (eventual) result, i.e. everything is still non-blocking up until you Await the outcome of the Future. The parent actor can then compose these Futures and send the combined result on to another actor - it already knows each Future's identity, so you won't need to worry about tagging each message in order to put the results back in order later. Here's a simple example where each child returns a random Double, and you want to divide the first child's return value by the second child's return value (i.e. order matters).
import scala.concurrent.Future
import scala.concurrent.duration._
import akka.actor.{Props, Actor}
import akka.pattern.{ask, pipe}
import akka.util.Timeout

class A extends Actor {
  import context.dispatcher

  val c1 = context.actorOf(Props[C], "C1")
  val c2 = context.actorOf(Props[C], "C2")

  // The ask operation involves creating an internal actor for handling
  // this reply, which needs to have a timeout after which it is
  // destroyed in order not to leak resources; see more below.
  implicit val timeout = Timeout(5 seconds)

  def receive = {
    case _ => {
      val f1 = c1 ? "anything" // Future[Any]
      val f2 = c2 ? "anything" // Future[Any]
      val result: Future[Double] = for {
        d1 <- f1.mapTo[Double]
        d2 <- f2.mapTo[Double]
      } yield d1 / d2
      result pipeTo sender() // or pipe to whichever actor should receive the combined value
    }
  }
}

class C extends Actor {
  def receive = {
    case _ => sender() ! scala.util.Random.nextDouble() // reply with a random Double
  }
}
I am wondering if there is a better way to handle async initialization of values within an Actor. Actors are of course thread-safe as long as you stay inside the actor, but using Futures throws a wrinkle into that (and you have to make sure you don't close over context or sender). Consider the following:
class ExampleActor(ref1: ActorRef, ref2: ActorRef) extends Actor {
  implicit val ec = context.dispatcher

  val promise1 = Promise[Int]
  val promise2 = Promise[Int]

  def receive = {
    case Request1.Response(x) => promise1.success(x)
    case Request2.Response(y) => promise2.success(y)
    case CombinedResponse(x, y) => x + y
  }

  promise1.future foreach { x =>
    promise2.future foreach { y =>
      self ! CombinedResponse(x, y)
    }
  }

  ref1 ! Request1
  ref2 ! Request2
}
Is there a better/more idiomatic way of handling parallel requests like this?
You actually don't need futures to handle multi-part response:
var x: Option[Int] = None
var y: Option[Int] = None

def receive = {
  case Request1.Response(rx) => x = Some(rx); checkParts
  case Request2.Response(ry) => y = Some(ry); checkParts
}

def checkParts = for {
  xx <- x
  yy <- y
} parent ! xx + yy
By the way, you may use for-comprehension in the same way even with futures.
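For example, the nested foreach in the question collapses to:

for {
  x <- promise1.future
  y <- promise2.future
} self ! CombinedResponse(x, y)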
A more functional way to manage the actor's state:
case class Resp1(x: Int)
case class Resp2(y: Int)
case class State(x: Option[Int], y: Option[Int])

class Worker(parent: ActorRef) extends Actor {
  def receive = process(State(None, None))

  def process(s: State): Receive = edge(s) andThen { sn =>
    context become process(sn)
    for {
      xx <- sn.x
      yy <- sn.y
    } parent ! xx + yy // action
  }

  def edge(s: State): PartialFunction[Any, State] = { // managing state
    case Resp1(x) => s.copy(x = Some(x))
    case Resp2(y) => s.copy(y = Some(y))
  }
}
Reusing the actor instead of creating a future is better because promise.success actually performs an unmanageable side effect by submitting a task to an executor, so it's not a purely functional approach. The actor's state is better: side effects inside an actor are always consistent with other actors' - they are applied step by step and only in response to some message. So you may see the actor simply as a fold over an infinite collection; the state, and the (also infinite) messages sent by the actor, may be seen as the fold's accumulator.
Talking about Akka, its actors have some IoC-like features, such as automatic exception handling (through supervision), which aren't available inside a Future. In your case, you have to introduce an additional composite message just to return into the actor's IoC context. Adding any action other than self ! CombinedResponse(x, y) (which, for example, might accidentally be done by some other developer to implement a workaround) is unsafe.