How to check the status of Scala actors from the main process?

I am relatively new to Scala actors. I have a huge map that is grouped into smaller blocks and processed through actors. The number of actors created varies with the size of the map. The actors work well and the processing completes. But how do I check the status of the generated actors? In Java I am familiar with using thread-pool executor services. How is this done in Scala?

There are multiple ways to do what you want:
Have the worker actor send a message back to the sender to inform it that the operation has completed. Each actor has a reference to the sender of the current message, which you can use to send back a completion message. The parent can then handle that message.
Instead of sending a message via a tell (e.g. actor ! msg), use ask, which returns a Future. You can set up a callback on the Future that runs upon completion (a sketch of this approach follows the example below).
If a worker actor is launched for a one-time operation, have it stop itself once the operation finishes. The parent actor (the one that created the worker) can watch the worker via the DeathWatch mechanism, which informs the parent when the child actor terminates. In this approach, termination means the operation has completed. However, the parent needs to count the terminations it receives in order to know when all the worker actors have finished.
Which approach to use depends on your use case and the nature of the operations. The most common and flexible approach is #1. Example (not tested):
import akka.actor.{Actor, Props}

case class PerformWork(i: Int)
case object WorkDone

class ParentActor(size: Int) extends Actor {
  // spawn one worker per block and hand each its piece of work
  for (i <- 1 to size) {
    val actor = context.actorOf(Props[Worker], s"worker$i")
    actor ! PerformWork(i)
  }

  var result = 0

  def receive = {
    case WorkDone =>
      result += 1
      if (result == size) {
        // all workers have reported back
      }
  }
}

class Worker extends Actor {
  def receive = {
    case m: PerformWork =>
      // do some work
      // ...
      sender() ! WorkDone
  }
}
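For approach #2, here is a minimal, untested sketch of the ask pattern, reusing the PerformWork, WorkDone and Worker definitions above; the system name, the worker count of 10, and the 5-second timeout are arbitrary choices for illustration:

import akka.actor.{ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._
import scala.util.{Failure, Success}

object AskExample extends App {
  val system = ActorSystem("work")
  import system.dispatcher                        // ExecutionContext for the Future callbacks
  implicit val timeout: Timeout = Timeout(5.seconds)

  val workers = (1 to 10).map(i => system.actorOf(Props[Worker], s"worker$i"))

  // ask (?) returns a Future that completes with the worker's reply (WorkDone)
  val replies: Seq[Future[Any]] = workers.zipWithIndex.map {
    case (worker, i) => worker ? PerformWork(i)
  }

  // completes once every worker has answered, or fails on the first timeout
  Future.sequence(replies).onComplete {
    case Success(_) => println("all workers done"); system.terminate()
    case Failure(e) => println(s"a worker failed or timed out: $e"); system.terminate()
  }
}

Future.sequence gives you a single completion point for the whole batch, which is essentially the role the counter plays in the tell-based example above.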

Related

Akka: Keep unmatched messages in the mailbox

I'm familiar with Erlang/Elixir, in which messages that are in a process' mailbox remain in the mailbox until they are matched:
The patterns Pattern are sequentially matched against the first message in time order in the mailbox, then the second, and so on. If a match succeeds and the optional guard sequence GuardSeq is true, the corresponding Body is evaluated. The matching message is consumed, that is, removed from the mailbox, while any other messages in the mailbox remain unchanged.
(http://erlang.org/doc/reference_manual/expressions.html#receive)
However, with Akka Actors unmatched messages are removed from the mailbox.
This is annoying when implementing, for instance, the forks in a dining philosophers simulation:
import akka.actor._

object Fork {
  def props(id: Int): Props = Props(new Fork(id))

  final case class Take(philosopher: Int)
  final case class Release(philosopher: Int)
  final case class TookFork(fork: Int)
  final case class ReleasedFork(fork: Int)
}

class Fork(val id: Int) extends Actor {
  import Fork._

  object Status extends Enumeration {
    val FREE, TAKEN = Value
  }

  private var _status: Status.Value = Status.FREE
  private var _held_by: Int = -1

  def receive = {
    case Take(philosopher) if _status == Status.FREE =>
      println(s"\tPhilosopher $philosopher takes fork $id.")
      take(philosopher)
      sender() ! TookFork(id)
      context.become(taken, false)

    case Release(philosopher) if _status == Status.TAKEN && _held_by == philosopher =>
      println(s"\tPhilosopher $philosopher puts down fork $id.")
      release()
      sender() ! ReleasedFork(id)
      context.unbecome()
  }

  def take(philosopher: Int) = {
    _status = Status.TAKEN
    _held_by = philosopher
  }

  def release() = {
    _status = Status.FREE
    _held_by = -1
  }
}
When a Take(<philosopher>) message is sent to the fork, we want the message to stay in the mailbox until the fork is released and the message is matched. However, in Akka, Take(<philosopher>) messages are dropped from the mailbox if the fork is currently taken, since there is no match.
Currently, I solve this problem by overriding the unhandled method of the Fork actor and forwarding the message to the fork again:
override def unhandled(message: Any): Unit = {
  self forward message
}
I believe this is terribly inefficient as it keeps sending the message to the fork until it is matched. Is there another way to solve this problem which does not involve continuously forwarding unmatched messages?
I believe that worst case I will have to implement a custom mailbox type that mimics Erlang mailboxes, as described here: http://ndpar.blogspot.com/2010/11/erlang-explained-selective-receive.html
EDIT: I modified my implementation based on Tim's advice and I use the Stash trait as suggested. My Fork actor now looks as follows:
class Fork(val id: Int) extends Actor with Stash {
  import Fork._

  // Fork is in the "taken" state
  def taken(philosopher: Int): Receive = {
    case Release(`philosopher`) =>
      println(s"\tPhilosopher $philosopher puts down fork $id.")
      sender() ! ReleasedFork(id)
      unstashAll()
      context.unbecome()
    case Take(_) => stash()
  }

  // Fork is in the "free" state
  def receive = {
    case Take(philosopher) =>
      println(s"\tPhilosopher $philosopher takes fork $id.")
      sender() ! TookFork(id)
      context.become(taken(philosopher), false)
  }
}
However, I don't want to write the stash() and unstashAll() calls everywhere. Instead, I want to implement a custom mailbox type that does this for me, i.e. stashes unhandled messages and unstashes them when a message has been processed by the actor. Is this possible?
I tried to implement a custom mailbox that does this; however, I couldn't determine whether a message did or did not match the receive block.
The problem with forward is that it may re-order the messages if there are multiple messages waiting to be processed, which is probably not a good idea.
The best solution here would seem to be to implement your own queue inside the actor that gives the semantics you want. If you can't process a message immediately, put it on the queue, and when the next message arrives you can process as much of the queue as possible (a sketch follows below). This would also allow you to detect when senders send inconsistent messages (e.g. a Release for a fork that they did not Take), which would otherwise just build up in the incoming mailbox.
I would not worry about efficiency until you can prove it is a problem, but it will be more efficient if each receive function only processes the messages that are relevant in that particular state.
I would avoid using var in the actor by putting the state in the parameters of the receive methods. The _status value is implicit in the choice of receive handler and doesn't need to be stored as a separate value. The taken handler only needs to process Release messages, and the main receive handler only needs to process Take messages.
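A rough, untested sketch combining both suggestions: the pending Take requests live in an internal queue held in the receive parameters, so no vars are needed. QueueingFork is an invented name; Take, Release, TookFork and ReleasedFork are the messages from the question's Fork companion object:

import akka.actor.{Actor, ActorRef}

class QueueingFork(val id: Int) extends Actor {
  import Fork._ // Take, Release, TookFork, ReleasedFork from the question

  def receive: Receive = free

  // Free state: hand the fork to the first philosopher that asks for it.
  private def free: Receive = {
    case Take(philosopher) =>
      sender() ! TookFork(id)
      context.become(taken(philosopher, Vector.empty))
  }

  // Taken state: remember the holder and queue every other Take request.
  private def taken(holder: Int, pending: Vector[(Int, ActorRef)]): Receive = {
    case Take(philosopher) =>
      context.become(taken(holder, pending :+ (philosopher -> sender())))
    case Release(`holder`) =>
      sender() ! ReleasedFork(id)
      pending match {
        case (nextPhilosopher, nextSender) +: rest =>
          nextSender ! TookFork(id) // serve the oldest queued request first
          context.become(taken(nextPhilosopher, rest))
        case _ =>
          context.become(free)
      }
    case Release(other) =>
      // an inconsistent message: someone released a fork they do not hold
      println(s"\tPhilosopher $other tried to release fork $id held by $holder")
  }
}

Because the state is rebuilt on each context.become call, there is nothing mutable to reason about, which is exactly what the advice above recommends.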
There exists a sample project in the Akka repository that houses multiple implementations of the "Dining Philosophers" problem. A key difference between your approach and theirs is that they implement both the utensils and the philosophers as actors, whereas you define only the utensil as an actor. The sample implementations show how to model the problem without dealing with unhandled messages or using a custom mailbox.

How to ensure message consistency when using futures in Akka

I would like to understand how to work with a stateful actor when I have async calls inside the actor.
Consider the following actor:
@Singleton
class MyActor @Inject() () extends Actor with LazyLogging {
  import context.dispatcher

  override def receive: Receive = {
    case Test(id: String) =>
      Future { logger.debug(s"id [$id]") }
  }
}
and a call to this actor:
Stream.range(1, 100).foreach { i =>
  myActor ! Test(i.toString)
}
This will give me an inconsistent printing of the series.
How am I supposed to use Futures inside an actor without losing the entire "one message after another" behaviour?
You should store that Future in a var, and on every subsequent message make a flatMap call:
if (storedFut == null) storedFut = Future { logger.debug(s"id [$id]") }
else storedFut = storedFut.flatMap(_ => Future { logger.debug(s"id [$id]") })
flatMap is exactly what gives you ordering of Futures (see the sketch below).
Sidenote
If you want things to happen in parallel, you are in the nondeterministic zone, where you cannot impose ordering.
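Here is a minimal, untested sketch of that flatMap chaining inside an actor; OrderedLogger is an invented name, Test(id) is the message case class from the question, and println stands in for the question's logger:

import akka.actor.Actor
import scala.concurrent.Future

case class Test(id: String) // as in the question

class OrderedLogger extends Actor {
  import context.dispatcher // ExecutionContext used to run and chain the Futures

  // start from an already-completed Future instead of a null check
  private var queued: Future[Unit] = Future.successful(())

  override def receive: Receive = {
    case Test(id) =>
      // flatMap only starts the new Future once the previous one has finished,
      // so the ids are printed in the order the messages were processed
      queued = queued.flatMap(_ => Future { println(s"id [$id]") })
  }
}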
What you're observing is not a violation of Akka's "message ordering per sender–receiver pair" guarantee; it is the nondeterministic nature of Futures, as @almendar mentions in their answer.
The Test messages are being sent to MyActor sequentially, from Test("1") to Test("100"), and MyActor is processing each message in its receive block in that same order. However, you're logging each message inside a Future, and the order in which those Futures complete is nondeterministic. This is why you see the "inconsistent printing of the series."
To get the desired behavior of logging the messages sequentially, don't wrap the logging in a Future. If you must use a Future inside an actor's receive block, then @almendar's approach of using a var inside the actor is safe.
You can use context.become and stash messages: wait for the Future to finish, then process the next message (a sketch follows below).
You can find more about how to use Stash, with an example, in the documentation: http://doc.akka.io/api/akka/current/akka/actor/Stash.html
Remember: message ordering is guaranteed only if the messages are sent from the same machine, because of network characteristics.
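A rough, untested sketch of that become/stash approach; OneAtATime and the internal Done signal are invented names, and println again stands in for the logger:

import akka.actor.{Actor, Stash}
import scala.concurrent.Future

case class Test(id: String) // same message as in the question

class OneAtATime extends Actor with Stash {
  import context.dispatcher
  private case object Done // internal signal, invented for this sketch

  def receive: Receive = idle

  private def idle: Receive = {
    case Test(id) =>
      // run the asynchronous work, then tell ourselves when it has finished
      Future { println(s"id [$id]") }.onComplete(_ => self ! Done)
      context.become(busy)
  }

  private def busy: Receive = {
    case Done =>
      unstashAll() // replay everything received while busy
      context.become(idle)
    case _ => stash()
  }
}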
Another way would be to have the actor send itself a message from Future.onComplete, assuming there are no restrictions on the order of processing:
// inside receive
import scala.util.{Failure, Success}

val future = Future { logger.debug(s"id [$id]") }
future.onComplete {
  case Success(value) => self ! TestResult(s"Got the callback, meaning = $value")
  case Failure(e) => self ! TestError(e)
}

How to test an Akka actor that spawns child actors at runtime?

Following my other question, I have a UDP server actor as follows:
import java.net.InetSocketAddress
import akka.actor.{Actor, ActorRef, Props}
import akka.io.{IO, Udp}

class Listener(addr: InetSocketAddress) extends Actor {
  import context.system
  IO(Udp) ! Udp.Bind(self, addr)

  def spawnChild(remote: InetSocketAddress): ActorRef = {
    // check whether a child for this remote already exists
    context.actorOf(Props[Worker])
  }

  def receive = {
    case Udp.Bound(local) =>
      context.become(ready(sender()))
  }

  def ready(socket: ActorRef): Receive = {
    case Udp.Received(data, remote) =>
      val worker = spawnChild(remote)
      worker ! data // forward the data directly to the child
    case Udp.Unbind => socket ! Udp.Unbind
    case Udp.Unbound => context.stop(self)
  }
}
I am creating child actors based on where the data is being sent from. The reason for this is to keep some internal state per child actor. The internal state includes the last connection time, the total number of packets sent, etc.
I want to set up TestProbes to test that:
data from remoteA is forwarded to TestProbeA
data from remoteB is forwarded to TestProbeB
data from remoteA is not forwarded to TestProbeB
I have read the section Using multiple probes. However, in my situation it is the parent actor who is responsible for creating the children.
How should I write my specs in this case? Or perhaps, how should I refactor my code to be more test-friendly?
The Akka documentation has a section that outlines several ways this could be achieved: Testing parent-child relationships.
At work we've been successfully using the approach explained in "Externalize child making from the parent". What that means in practice is that your parent actor takes a "child factory" parameter, either at initialization time or through a message.
In testing code you can provide a "fake" factory that returns a test probe instead of a real actor (see the sketch below).
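A rough, untested sketch of that idea applied to the Listener above; the childFactory signature, the children map, and the probeFor helper in the comment are illustrative assumptions rather than the exact code from the Akka docs:

import java.net.InetSocketAddress
import akka.actor.{Actor, ActorContext, ActorRef, Props}
import akka.io.{IO, Udp}

// The parent no longer calls context.actorOf directly; it asks the injected
// factory for a child, so tests can hand back a TestProbe's ref instead.
class Listener(addr: InetSocketAddress,
               childFactory: (ActorContext, InetSocketAddress) => ActorRef)
    extends Actor {
  import context.system
  IO(Udp) ! Udp.Bind(self, addr)

  private var children = Map.empty[InetSocketAddress, ActorRef]

  def receive: Receive = {
    case Udp.Bound(_) => context.become(ready(sender()))
  }

  def ready(socket: ActorRef): Receive = {
    case Udp.Received(data, remote) =>
      val worker = children.getOrElse(remote, {
        val w = childFactory(context, remote) // creation is delegated to the factory
        children += remote -> w
        w
      })
      worker ! data
    case Udp.Unbind => socket ! Udp.Unbind
    case Udp.Unbound => context.stop(self)
  }
}

// Production wiring (illustrative):
//   Props(new Listener(addr, (ctx, _) => ctx.actorOf(Props[Worker])))
// Test wiring: pass (_, remote) => probeFor(remote).ref, then assert with probe.expectMsg(...)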

Is it possible to receive or react to a message outside of the act function?

I'm using the RXTX library to send some data to a serial port. After sending the data I must wait 1 second for an ACK. I have implemented this using an ArrayBlockingQueue:
E.g.
import java.util.concurrent.{ArrayBlockingQueue, TimeUnit}

val queue = new ArrayBlockingQueue[Array[Byte]](1)

def send(data2Send: Array[Byte]): Array[Byte] = {
  out.write(data2Send)
  queue.poll(1000, TimeUnit.MILLISECONDS)
}

def receive(receivedData: Array[Byte]): Unit = {
  queue.add(receivedData)
}
This works perfectly, but since I'm learning Scala I would like to use actors instead of threads and locking structures.
My first attempt is as follows:
class Serial {
  val sender = new Sender(...)
  new Receiver(...).start

  class Sender {
    def send(data2Send: Array[Byte]): Array[Byte] = {
      out.write(data2Send)
      receiveWithin(WAIT_TIMEOUT_MILLIS) {
        case response => response
        case TIMEOUT => null
      }
    }
  }

  class Receiver extends Actor {
    def act() {
      loop {
        sender ! read()
      }
    }
  }
}
But this code throws java.lang.AssertionError: assertion failed: receive from channel belonging to other actor. I think the problem is that I can't use receive or react outside the act definition. Is the approach I'm following right?
Second attempt:
class Serial {
  new Sender(...).start
  new Receiver(...).start

  def send(data2Send: Array[Byte]) = (sender !? data2Send).asInstanceOf[Array[Byte]]

  class Sender extends Actor {
    def act() {
      loop {
        receive {
          case data2Send: Array[Byte] =>
            out.write(data2Send)
            receiveWithin(WAIT_TIMEOUT_MILLIS) {
              case response => response
              case TIMEOUT => null
            }
        }
      }
    }
  }

  class Receiver extends Actor {
    def act() {
      loop {
        sender ! read()
      }
    }
  }
}
In this second attempt I get java.util.NoSuchElementException: head of empty list when the sender ! read() line is executed. And it looks a lot more complex.
Unless you are using NIO, you can't avoid blocking in this situation. Ultimately your message is coming from a socket which you need to read() from (i.e. some thread, somewhere, must block).
Looking at your 3 examples (even assuming they all worked), if you found a bug at 2am in six months' time, which code snippet do you think you'd rather be looking at? I know which one I would choose!
Actors are great for sending asynchronous messages around an event-driven system. Where you hit an external API that uses sockets, you can wrap an actor-like facade around it, so that the rest of the system can interact with it without knowing the implementation details. A few pointers:
For the actor that actually reads from and writes to the socket, keep it simple.
Try to organize the system so that communication with this actor is asynchronous (i.e. other actors are not blocking and awaiting replies).
Given that the Scala actor library is imminently being deprecated, I'd start using Akka.
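A very rough, untested Akka sketch of such a facade; the Send/Response/AckTimeout messages, the SerialFacade actor, and the SerialPortLike trait are all invented for illustration, and a dedicated reader (a blocking thread or child actor) is assumed to turn incoming bytes into Response messages:

import akka.actor.{Actor, ActorRef, Cancellable}
import scala.concurrent.duration._

// Messages seen by the rest of the system; the serial-port details stay hidden.
case class Send(data: Array[Byte])
case class Response(data: Array[Byte])
case object AckTimeout

// SerialPortLike stands in for whatever the real RXTX API exposes.
trait SerialPortLike { def write(data: Array[Byte]): Unit }

class SerialFacade(port: SerialPortLike) extends Actor {
  import context.dispatcher

  def receive: Receive = idle

  private def idle: Receive = {
    case Send(data) =>
      port.write(data) // the blocking write stays inside this actor
      val timer = context.system.scheduler.scheduleOnce(1.second, self, AckTimeout)
      context.become(waiting(sender(), timer))
  }

  // Response messages come from the dedicated reader that blocks on read()
  // and forwards incoming bytes to this facade.
  private def waiting(requester: ActorRef, timer: Cancellable): Receive = {
    case Response(data) =>
      timer.cancel()
      requester ! Response(data) // the ACK is delivered asynchronously
      context.become(idle)
    case AckTimeout =>
      requester ! AckTimeout
      context.become(idle)
    // Send messages arriving while waiting are left unhandled in this sketch (they could be stashed)
  }
}

The rest of the system only ever exchanges Send and Response messages with the facade, so the blocking read/write calls stay confined to one place.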

Calling a blocking Actor from within an Actor

Given that I invoke an Actor from inside react, does this block the calling Actor, or can it still process other requests?
class Worker extends Actor {
  def act() = {
    loop {
      react {
        case msg =>
          val foo = another_actor !? 'bar // Block here?
          println(foo)
      }
    }
  }
}
!? always blocks the calling thread. If it is invoked in the middle of an actor's react block, the calling actor is blocked as well.
The main idea of actors is a single thread of execution. An actor will not proceed to the next message until the current one has finished processing. This way, developers do not need to worry about concurrency as long as the messages are immutable.
See this question about the fact that actors cannot process messages simultaneously (i.e. each actor processes its inbox sequentially).
I certainly don't think that this is very clear from the explanations in Programming in Scala. You can (sort of) achieve what you want by creating an on-the-fly actor:
loop {
  react {
    case Command(args) =>
      val f = other !! Request(args) // create a Future
      // now create an inline actor
      val _this = self
      actor { // this creates an on-the-fly actor
        _this ! Result(args, f.get) // f.get is a blocking call
      }
    case Result(args, result) =>
      // now process this
  }
}
Of course, the on-the-fly actor will block, but it does leave your original actor able to process new messages. The actors subsystem will create new threads up to actors.maxPoolSize (default is 256) if all current pooled worker threads are busy.