Calling the !! method from one actor to another worker actor appears to keep the channel open even after the reply has been received by the caller (i.e. the future is ready).
For example, using !! to send 11 different messages from one actor to another worker actor will result in 11 messages similar to the below being shown in the mailbox of the original caller, each with a different Channel#xxxx value.
!(scala.actors.Channel#11b456f,Exit(com.test.app.actor.QueryActor#4f7bc2,'normal))
Are these messages awaiting replies from the worker, because the original caller sends an Exit message out upon its own call to exit(), or are they generated on the other end and for some reason have the printed form shown above? By this point the worker actor has already exited, so the original caller of !! will definitely never receive any replies.
This behavior is undesirable, as the original calling actor's mailbox fills with these exit messages (one for each channel created for each time !! was used).
How can this be stopped? Is the original caller automatically "linking" to the reply channels created on each !! call?
The reason these Exit messages are sent to the original caller is that the caller links its temporary channel, which is used to receive the future result, to the callee. In particular, if the channel receives an exit signal, an Exit message is sent on that channel, which results in a message similar to the one you describe being sent to the actual caller (you can think of channels as tags on messages). This is done to allow (re-)throwing an exception inside the caller if the callee terminates before serving the future's message send (the exception is thrown upon accessing the future).
The problem with the current implementation is that the caller receives an Exit message even if the future has already been resolved. This is clearly a bug that should be filed on the Scala Trac. A possible fix is to only send an Exit message if the future has not yet been resolved; in that case the Exit message will be removed whenever the future is accessed for the first time using apply or isSet.
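Until that is fixed, a possible workaround is to drain these channel-tagged Exit messages from the caller's own mailbox once every future returned by !! has been forced. The sketch below assumes the old scala.actors API (the ! wrapper case class, Exit, receiveWithin); treat it as an illustration rather than a verified fix.

import scala.actors._
import scala.actors.Actor._

// Call this from inside the calling actor's body, after every future returned
// by !! has been forced with apply() or checked with isSet.
def drainStaleExits(): Unit = {
  var more = true
  while (more) {
    receiveWithin(0) {
      // A stale Exit tagged with a reply channel that is no longer needed.
      case scala.actors.!(_, Exit(_, _)) => ()
      case TIMEOUT => more = false
    }
  }
}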
I am a bit confused looking at the docs here and here.
receive() is a method which accepts no parameters and returns a partial function. tell() is a method which returns Unit and 'sends' the message. Now, for the message to be processed, in my understanding two things have to happen:
receive() should be invoked by tell()
The message should be passed to the partial function returned by receive()
Now, if the partial function is returned to the place where tell() was used, how does message-based communication work? Why isn't the operation performed inside the actor itself?
Because these are internals there is no documentation, but you can check out the source code yourself here: https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/actor/ActorCell.scala. Everything from how tell puts the message into the mailbox, to how the message is extracted from it, to how receive is called, is in there.
Hope it answers your question.
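In other words, tell only enqueues the message into the actor's mailbox and returns Unit; the partial function returned by receive is kept by the actor and applied, on the actor's own thread, to each message the dispatcher dequeues. A minimal sketch with classic (untyped) Akka actors, all names invented:

import akka.actor.{Actor, ActorSystem, Props}

class Greeter extends Actor {
  // receive returns a PartialFunction[Any, Unit]; Akka applies it to each
  // message dequeued from this actor's mailbox, inside the actor.
  def receive: Receive = {
    case name: String => println(s"Hello, $name")
  }
}

object Demo extends App {
  val system = ActorSystem("demo")
  val greeter = system.actorOf(Props[Greeter], "greeter")
  greeter ! "world" // tell: enqueue the message and return immediately
  // The greeting is printed later, on the Greeter's thread, not here.
}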
How can I test an actor? Since the calls are not synchronous, and one message can cause multiple messages to be sent, what are the ways of testing it?
E.g. How can I test that my actor sent 3 messages in response to another message?
In general you cannot test what an actor has done unless it interacts with a trait or interface that you provide in its constructor or in an input message. So if you have an actor like the following:
actor MyActor
  be do_stuff(receiver: MyReceiver) =>
    receiver.result("done") // hypothetical behaviour on MyReceiver
You use a pattern where you combine a timer, for a timeout, and a wrapper actor that provides MyReceiver, to test whether the actor actually sent the message or sequence of messages that were expected.
Pony already includes the ponytest package with some support for this kind of test. You can check the PonyTest actor on GitHub. Basically you have to specify a timeout and ensure one of your actors calls complete() on the test helper for success. Note that the API has changed since the last stable version.
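The wrapper-plus-timeout idea itself is language-agnostic. As a rough sketch (in Scala, with all names invented), a test can hand the actor under test a probe implementation of the receiver interface and wait on a latch with a timeout:

import java.util.concurrent.{CountDownLatch, TimeUnit}

trait MyReceiver { def received(msg: String): Unit }

// Test double that counts messages and releases the latch once it has seen
// the expected number of them.
class ProbeReceiver(expected: Int) extends MyReceiver {
  private val latch = new CountDownLatch(expected)
  def received(msg: String): Unit = latch.countDown()
  // True only if all expected messages arrived within the timeout.
  def awaitAll(timeoutMs: Long): Boolean = latch.await(timeoutMs, TimeUnit.MILLISECONDS)
}

// Usage: pass new ProbeReceiver(3) to the actor under test, trigger it, then
// assert that awaitAll(2000) returns true to check that 3 messages were sent.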
I'm using the Play Framework with scala. I'm new to scala, akka, and play.
This is my actor system. I'm not sure I'm doing this right, but I have two routers: one for Actor A and one for Actor B:
val system = ActorSystem("ActionSystem")
val actorARouter = system.actorOf(Props[ActionParser].withRouter(
  SmallestMailboxRouter(Runtime.getRuntime.availableProcessors())), name = "actorARouter")
val actorBRouter = system.actorOf(Props[ActionDispatcher].withRouter(
  SmallestMailboxRouter(Runtime.getRuntime.availableProcessors())), name = "actorBRouter")
This is the current setup that I have:
The Play Framework provides a Controller for me that receives an HTTP REST call with some JSON. Whenever the controller receives a REST call, I do an ask that sends the JSON to the router for Actor A. Here is what that looks like:
(actorARouter ? request.body.asJson.get).map {
  case m: controllers.HttpMessages.OK => Ok(m.body)
  case m: controllers.HttpMessages.HttpResponse => Status(m.status)(m.body)
}
Actor A then parses the JSON into a Seq of objects and sends them via an ask to Actor B. Actor B is supposed to eventually process those by sending them to other actors, but for now it just returns generic responses.
The generic responses are received back by Actor A via the future, parsed to JSON, and returned to the Controller via an OK response... or at least that's what is supposed to happen.
What's happening:
The controller sends to Actor A, Actor A sends to Actor B, and Actor B sends generic responses back to Actor A. Actor A parses the generic responses into JSON and tries to do sender ! OK(json), but I get a message in the console saying it wasn't delivered because it's a "dead letter". When I debug into it and look at sender, it is a reference to the actor akka://ActionSystem/deadLetters.
My questions:
Obviously I'm doing something wrong. Maybe I shouldn't be chaining these actors' responses together like this. Again, I only plan to take this further by having Actor B send out requests to other actors.
When I do an ask inside an actor, does that hog the thread and stop it from processing other messages while it's waiting for a response?
EDIT:
I found out I can save a reference to the sender for later use and then send to that, and that seems to fix the dead-letter problem. But I'm still very uncertain whether this is the right way to do things. It feels like every layer of actors I add puts another few tens of milliseconds onto my response time. Perhaps that's due to other factors, though.
Without looking at your code I cannot really say what caused the dead letter; from your edit I guess you closed over sender() instead of assigning it to a variable and closing over that.
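In case it helps, here is a minimal sketch (classic untyped actors, names invented) of capturing sender() before going asynchronous:

import akka.actor.Actor
import scala.concurrent.Future

class Worker extends Actor {
  import context.dispatcher

  def receive = {
    case work: String =>
      // Capture the reference now: sender() is only valid while the current
      // message is being processed, so evaluating it later inside the Future
      // callback typically yields deadLetters.
      val originalSender = sender()
      Future { work.toUpperCase }.foreach { result =>
        originalSender ! result
      }
  }
}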
To respond to your questions:
It is much easier to construct message flows with actors if you only use fire-and-forget messages. The ask pattern is useful in some cases, but most of the time you should try to avoid it. What you can do instead is pass the original sender along through your actors by using forward instead of tell. This way the response can be generated by the last actor in your message flow; the first actor only needs the code to handle the response and does not need to care about generating it. Nice separation of concerns right there. If you need to aggregate several responses before sending out a single response, you can also use a temporary actor that all the other actors send their responses to, and that knows the original sender. Temporary actors need to be stopped after doing their work.
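A rough sketch of that forward-based flow (actor and message types invented):

import akka.actor.{Actor, ActorRef}

class FrontActor(backend: ActorRef) extends Actor {
  def receive = {
    case req: String =>
      // forward keeps the original sender, so the backend can reply directly
      backend forward req
  }
}

class BackendActor extends Actor {
  def receive = {
    case req: String =>
      // sender() here is the original caller (e.g. the temporary actor behind an ask)
      sender() ! s"handled: $req"
  }
}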
As far as I know the ask pattern is asynchronous and uses temporary actors internally. However, if you wait for the result of the future inside your actor, that will block the actor, and it will not be able to process further messages. A nice way to use the ask pattern is in combination with the pipeTo pattern, which sends the result of the ask to an actor (usually self).
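A minimal ask-plus-pipeTo sketch (classic actors; the timeout and message types are illustrative):

import akka.actor.{Actor, ActorRef}
import akka.pattern.{ask, pipe}
import akka.util.Timeout
import scala.concurrent.duration._

case class Request(payload: String)
case class Response(payload: String)

class Aggregator(worker: ActorRef) extends Actor {
  import context.dispatcher
  implicit val timeout: Timeout = Timeout(5.seconds)

  def receive = {
    case Request(p) =>
      // ask returns a Future; pipeTo turns its completion into an ordinary
      // message to self, so this actor never blocks waiting for the worker.
      (worker ? Request(p)).mapTo[Response].pipeTo(self)
    case Response(p) =>
      // handle the worker's reply here
      println(s"got $p")
  }
}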
I am using pseudo-synchronous sockets in a Windows Phone 7 application. My socket code is based on the sample from http://msdn.microsoft.com/en-us/library/hh202858(v=vs.92).aspx.
The server's sending pattern is somewhat unpredictable. It starts with a fixed-size header that contains the length of the rest of the message. I first read in this header, and then I read the specified number of bytes from the socket.
Since I need to send messages to the server as well, and my attempts at duplexing the socket with a thread for receiving and another thread for sending caused lots of problems, I have a loop like this in my code:
while (KeepConnectionGoing)
{
    byte[] Rcvd;
    // Returns null if no message is received within 50 ms
    Rcvd = Socket.Receive();
    if (Rcvd != null)
    {
        ParseMessage(Rcvd);
    }
    // Sends are interleaved on the same thread to avoid duplexing the socket
    if (HasMessageThatNeedsToBeSent())
    {
        byte[] Message = GetMessageToSend();
        Socket.Send(Message);
    }
}
This works fine for the majority of the time, but strange things happen when the message is null.
Because the timeout in the Receive method (see the linked sample) uses a ManualResetEvent, the receive request on the socket is never actually cancelled. Even though the method returns, that request waits around somewhere, and when data becomes available on the socket it chomps up the header. Since the event handler has nothing to do with the data it receives (the method has already returned, and the variables in the method will never be used again), the data basically disappears. The read request that I expect to return the header instead reads the bytes after the header, and I have no idea how long the message is.
I'd like to be able to cancel all outstanding requests if the socket times out. I am using anonymous methods as in the sample, since that simplifies everything and saves me from writing all the state-transfer code myself; thus, I cannot unhook the event handler. I think, though, that even if I were using a named method as the event handler and unhooked it before the asynchronous operation completed, the callback would still be called. (I haven't tested this; it's just my understanding.)
Right now, the only solution I can see is hacking together some static byte arrays (i.e. having a static byte[] Header: if it is null, I read the header, otherwise I read the message), but that seems like a really inelegant solution and very prone to race conditions.
Is there a better way?
Thanks
It appears there really is no good way to do this. A poll method would be nice, but Silverlight doesn't have one. I hacked together a solution using static flags to tell me what state I'm in (has the header been requested, has the message been requested), a static int for the length, and a static buffer.
At the beginning of the method, either the header or the body can be requested. If the header has already been requested, the thread waits until a valid body length is available. If this wait times out, it means the header receive operation is still pending and there really is no message available. Otherwise, it reads in a message of that length.
If the header has not been requested, receive the header. In the event handler, after completion, check whether control flow has already continued (i.e. the receive operation took too long, so the function has already returned, but is now actually done). Update the length, then request the body unless it timed out.
While writing Scala RemoteActor code, I noticed some pitfalls:
RemoteActor.classLoader = getClass().getClassLoader() has to be set in order to avoid "java.lang.ClassNotFoundException"
link doesn't always work due to "a race condition for which a NetKernel (the facility responsible for forwarding messages remotely) that backs a remote actor can close before the remote actor's proxy (more specifically, proxy delegate) has had a chance to send a message remotely indicating the local exit." (Stephan Tu)
RemoteActor.select doesn't always return the same delegate (RemoteActor.select - result deterministic?)
Sending a delegate over the network prevents the application from quitting normally (RemoteActor unregister actor)
Remote actors won't terminate if RemoteActor.alive() and RemoteActor.register() are used outside act. (See the answer by Magnus.)
Are there any other pitfalls a programmer should be aware of?
Here's another: you need to put your RemoteActor.alive() and RemoteActor.register() calls inside your act method when you define your actor, or the actor won't terminate when you call exit(); see How do I kill a RemoteActor?
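A minimal sketch of that placement (old scala.actors.remote API; the port number and the 'echo name are arbitrary placeholders):

import scala.actors.Actor
import scala.actors.Actor.loop
import scala.actors.remote.RemoteActor.{alive, register}

class EchoActor extends Actor {
  def act(): Unit = {
    // alive/register go inside act(); registering outside keeps the actor
    // from terminating after exit() (see the linked question above).
    alive(9010)
    register('echo, this)
    loop {
      react {
        case msg => reply(msg)
      }
    }
  }
}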