Is the dead letter box reusable in Akka? - scala

I've written the following code:
package com.star.wars

import akka.actor._
import akka.actor.SupervisorStrategy._
import akka.util.duration._

object Test extends App {
  case object Kill
  case object Create

  class Luke extends Actor {
    var x: Int = 0
    println("Luke here")
    Thread.sleep(1000)
    println("Luke here2")

    def receive = {
      case Kill => 1 / 0 // context.stop(self)
      case msg: String => {
        x += 1
        println(x + msg)
      }
    }
  }

  class Vader extends Actor {
    println("Vader here")

    override val supervisorStrategy =
      OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1 minute) {
        case _: ArithmeticException      => Restart
        case _: NullPointerException     => Restart
        case _: IllegalArgumentException => Restart
        case _: Exception                => Restart
      }

    def receive = {
      case Create => context.actorOf(Props(new Luke), name = "Luke")
      case Kill => {
        val luke = context.actorFor("/user/Vader/Luke")
        luke ! "Pre hi there"
        luke ! Kill
        luke ! "Post hi there"
        println("Pre -> Kill -> Post sent to Luke")
      }
    }
  }

  val system = ActorSystem("MySystem")
  val vader = system.actorOf(Props(new Vader), name = "Vader")
  vader ! Create
  vader ! Kill
  println("Create -> Kill sent to Vader")
}
The purpose of this code is to prove that while Luke is restarting, his dead letter box can receive messages, and that when Luke comes back online he receives the messages that were sent to him while he was absent.
The output seems OK. It is a proof, in a way:
Create -> Kill sent to Vader
Vader here
Luke here
Pre -> Kill -> Post sent to Luke
Luke here2
1Pre hi there
[ERROR] [01/12/2015 00:32:02.74] [MySystem-akka.actor.default-dispatcher-3] [akka://MySystem/user/Vader/Luke] / by zero
java.lang.ArithmeticException: / by zero
at com.sconysoft.robocode.Test$Luke$$anonfun$receive$1.apply(test.scala:21)
at com.sconysoft.robocode.Test$Luke$$anonfun$receive$1.apply(test.scala:20)
at akka.actor.Actor$class.apply(Actor.scala:318)
at com.sconysoft.robocode.Test$Luke.apply(test.scala:12)
at akka.actor.ActorCell.invoke(ActorCell.scala:626)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:197)
at akka.dispatch.Mailbox.run(Mailbox.scala:179)
at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:516)
at akka.jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:259)
at akka.jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
at akka.jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479)
at akka.jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Luke here
Luke here2
1Post hi there
However, I am not sure whether this always holds. I can't find it in the Akka documentation, so is my reasoning correct? Is the dead letter box reusable in Akka (after an actor restart)?
Btw, how can I handle Luke's context.stop(self) in Vader's supervisorStrategy?

This is not connected to dead letters; see Message Delivery Reliability. In Akka, when using in-JVM messaging, you have a guarantee that a message will be delivered with high probability in most cases (restarting after an exception is one of those). Messages that could not be delivered (like messages to a voluntarily stopped or never-existing actor) go to the DeadLetters box, but you have to subscribe to them explicitly, which is not what happened here. You simply received the messages from your own actor's mailbox (the mailbox isn't removed during a restart, only the actor's instance is). You need to explicitly subscribe to the corresponding Event Stream to watch deadLetters.
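A minimal sketch of such a subscription, reusing the system from your code (the listener actor name is made up; DeadLetter and the event stream are standard Akka API):
import akka.actor.{ Actor, DeadLetter, Props }

// made-up listener that just prints every dead letter it sees
class DeadLetterListener extends Actor {
  def receive = {
    case d: DeadLetter =>
      println("dead letter: " + d.message + " from " + d.sender + " to " + d.recipient)
  }
}

val listener = system.actorOf(Props(new DeadLetterListener), name = "deadLetterListener")
system.eventStream.subscribe(listener, classOf[DeadLetter])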
You can't handle context.stop(self) inside supervisorStrategy, because it is a voluntary termination (which will indeed cause subsequent messages to go to deadLetters), not an exceptional situation (a failure). So 1/0 and context.stop(self) are very different things. For listening to a child's lifecycle, see What Lifecycle Monitoring Means.
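A rough sketch of that kind of monitoring built on the code above (whether and how to recreate Luke is up to you; the replacement name is made up to avoid a name clash):
import akka.actor.{ Actor, Props, Terminated }

// variant of Vader that watches his child instead of relying on supervisorStrategy
class WatchingVader extends Actor {
  def receive = {
    case Create =>
      val luke = context.actorOf(Props(new Luke), name = "Luke")
      context.watch(luke)                 // ask for a Terminated message when Luke stops
    case Terminated(child) =>
      println(child.path.name + " stopped voluntarily; supervision was never involved")
      // recreate under a fresh name, since the old one may not have been released yet
      context.watch(context.actorOf(Props(new Luke), name = "Luke-respawned"))
  }
}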
For example, let's see what happens if you really put context.stop(self) into the code instead of 1/0:
Luke here
Pre -> Kill -> Post sent to Luke
Luke here2
1Pre hi there
[INFO] [01/12/2015 09:20:37.325] [MySystem-akka.actor.default-dispatcher-4] [akka://MySystem/user/Vader/Luke] Message [java.lang.String] from Actor[akka://MySystem/user/Vader#-1749418461] to Actor[akka://MySystem/user/Vader/Luke#-1436540331] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
So that's where the deadLetters mailbox is used (the "... was not delivered" entry in the logging).
Anyway, any delivery (including to deadLetters, as it's just a synthetic actor) is based on a best-effort principle for in-JVM messaging, so there are no absolute guarantees:
The Akka test suite relies on not losing messages in the local context
(and for non-error condition tests also for remote deployment),
meaning that we actually do apply the best effort to keep our tests
stable. A local tell operation can however fail for the same reasons
as a normal method call can on the JVM:
StackOverflowError
OutOfMemoryError
other VirtualMachineError
In addition, local sends can fail in Akka-specific ways:
if the mailbox does not accept the message (e.g. full BoundedMailbox)
if the receiving actor fails while processing the message or is already terminated
While the first is clearly a matter of configuration the second
deserves some thought: the sender of a message does not get feedback
if there was an exception while processing, that notification goes to
the supervisor instead. This is in general not distinguishable from a
lost message for an outside observer.
In the case of network messaging you have no delivery guarantee at all. Luke could never know who his father is. Why? Because it's faster, and actually Nobody Needs Reliable Messaging:
The only meaningful way for a sender to know whether an interaction
was successful is by receiving a business-level acknowledgement
message, which is not something Akka could make up on its own (neither
are we writing a “do what I mean” framework nor would you want us to).
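In practice such an acknowledgement is just another message in your own protocol; a tiny illustrative sketch (all names here are made up, nothing is Akka API):
// the interaction only counts as successful once the business-level ack arrives
case class DeliverPlans(id: Int)
case class PlansDelivered(id: Int)

class RebelCourier(empire: akka.actor.ActorRef) extends akka.actor.Actor {
  empire ! DeliverPlans(42)

  def receive = {
    case PlansDelivered(42) => println("confirmed at the business level")
    // anything else, or silence past a deadline, means: retry, escalate, or give up
  }
}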

Related

Akka TCP IO sends me connect message twice

I have a simple client/server program. The server listens as follows:
val manager = IO(Tcp)
manager ! Bind(self, myAddress, 1, options)
Then in the receive loop:
override def receive = {
  case b @ Bound(addr) =>
    log.info("bound")
    myAddress = addr
    bBound = true
  case c @ Connected(remoteAddress, localAddress) =>
    log.info("Client Connected. Remote: {} Local: {}", remoteAddress, localAddress)
    myAPAddress = remoteAddress
    remoteConnection = sender()
    remoteConnection ! Register(self, keepOpenOnPeerClosed = true)
    // first thing to do is to register yourself with a lookup
    mLookupManager ! AddMe(myAddress, this.context.self.path)
}
However, the Connected message is being received twice.
The server actor is not being restarted: I have overridden preRestart and it is not being called. The problem is that the lookup manager looks up the address and, if it finds an actor path with the same socket address, sends a PoisonPill to it and then adds the new actor. In this case, however, it kills the same actor and then adds its actor path.
Why would I get the Connected message twice? Any clue?
When Bind has completed successfully (you got Bound back), the server actor will get a Connected message for every new connection that is made to the socket.
The common pattern is to let the Connected message trigger the creation of a new actor that will be responsible for interacting with that specific client, rather than having the "server" actor do the interaction itself.
See the docs here for a sample that does exactly that: http://doc.akka.io/docs/akka/2.4.14/scala/io-tcp.html#Accepting_connections
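Roughly what that pattern looks like (the handler class and its echo behaviour are just an illustration, not code from your project):
import akka.actor.{ Actor, ActorRef, Props }
import akka.io.Tcp._

class Server extends Actor {
  def receive = {
    case Bound(localAddress) =>
      // bookkeeping, logging, ...
    case Connected(remote, local) =>
      val connection = sender()                        // the connection actor for this client
      val handler = context.actorOf(Props(new ConnectionHandler(connection)))
      connection ! Register(handler, keepOpenOnPeerClosed = true)
  }
}

// one instance per accepted connection
class ConnectionHandler(connection: ActorRef) extends Actor {
  def receive = {
    case Received(data) => connection ! Write(data)    // e.g. echo the bytes back
    case PeerClosed     => context.stop(self)
  }
}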

Akka: send error from routee back to caller

In my project I created a UserRepositoryActor which creates its own router with 10 UserRepositoryWorkerActor instances as routees; see the hierarchy below:
As you can see, if any error occurs while fetching data from the database, it occurs in a worker.
Whenever I want to fetch a user from the database, I send a message to UserRepositoryActor with this command:
val resultFuture = userRepository ? FindUserById(1)
and I set a 10 second timeout.
In case the network connection has a problem, UserRepositoryWorkerActor immediately gets a ConnectionException from the underlying database driver, and then (or so I think) the router restarts the current worker and sends the FindUserById(1) command to another available worker, while resultFuture gets an AskTimeoutException after the 10 seconds have passed. Some time later, once the connection is back to normal, UserRepositoryWorkerActor successfully fetches the data from the database, tries to send the result back to the caller, and finds that resultFuture has already timed out.
I want to propagate the error from UserRepositoryWorkerActor up to the caller immediately after the exception occurs, so that resultFuture does not have to wait for 10 seconds and UserRepositoryWorkerActor stops trying to fetch the data again and again.
How can I do that?
By the way, if you have any suggestions about my current design, please let me know. I'm very new to Akka.
Your assumption about the router resending the message is wrong. The router has already passed the message on to a routee and doesn't have it any more.
As far as the ConnectionException is concerned, you could wrap the call in a scala.util.Try and send the result to sender(). Something like:
import scala.util.{ Try, Success, Failure }

Try(SomeDAO.getSomeObjectById(id)) match {
  case Success(s) => sender() ! s
  case Failure(e) => sender() ! e
}
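One caveat if the caller used ask, as in the question: sending back a raw exception just completes the Future successfully with the exception as its value. Wrapping it in akka.actor.Status.Failure makes the asking Future fail instead; a variant of the snippet above:
import akka.actor.Status
import scala.util.{ Try, Success, Failure }

Try(SomeDAO.getSomeObjectById(id)) match {
  case Success(s) => sender() ! s
  case Failure(e) => sender() ! Status.Failure(e)   // fails the caller's ask Future
}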
Your design looks correct. Having a router allows you to distribute work and also to limit the number of concurrent workers accessing the database.
Option 1
You can make your router watch its children and act accordingly when they are terminated. For example (taken from here):
import akka.actor.{ Actor, Props, Terminated }
import akka.routing.{ ActorRefRoutee, RoundRobinRoutingLogic, Router }

// Work and Worker are defined elsewhere in the docs example
class Master extends Actor {
  var router = {
    val routees = Vector.fill(5) {
      val r = context.actorOf(Props[Worker])
      context watch r
      ActorRefRoutee(r)
    }
    Router(RoundRobinRoutingLogic(), routees)
  }

  def receive = {
    case w: Work =>
      router.route(w, sender())
    case Terminated(a) =>
      router = router.removeRoutee(a)
      val r = context.actorOf(Props[Worker])
      context watch r
      router = router.addRoutee(r)
  }
}
In your case you can send some sort of failure message from the repository actor to the client. The repository actor can maintain a map from worker ref to request id, so it knows which request failed when a worker terminates. It can also record the time between the start of a request and the actor's termination to decide whether it's worth retrying the request with another worker.
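A rough sketch of that bookkeeping (the message types, the routing helper and the retry policy are all assumptions, not part of your code):
import akka.actor.{ Actor, ActorRef, Terminated }

case class FindUserById(id: Long)
case class FindUserFailed(id: Long, reason: String)

class UserRepositoryActor extends Actor {
  // routee -> (request id, original caller); routees are created and watched as in Option 1
  var inFlight = Map.empty[ActorRef, (Long, ActorRef)]

  def receive = {
    case msg @ FindUserById(id) =>
      val routee = pickRoutee()                       // assumed helper around the Router
      inFlight += routee -> (id, sender())
      routee ! msg
    case Terminated(routee) =>
      inFlight.get(routee).foreach { case (id, caller) =>
        caller ! FindUserFailed(id, "worker terminated")
      }
      inFlight -= routee
      // recreate and re-add the routee as in Option 1
  }

  def pickRoutee(): ActorRef = ???                    // routing logic omitted in this sketch
}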
Option 2
Simply catch all non-fatal exceptions in your worker actor and reply with appropriate success/failure messages. This is much simpler, but you might still want to restart the worker to make sure it's in a good state.
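A sketch of Option 2 (again, the message types and the DAO call are assumed, not taken from your code):
import akka.actor.Actor
import scala.util.control.NonFatal

case class UserFound(user: Any)
case class UserLookupFailed(id: Long, error: Throwable)

class UserRepositoryWorkerActor extends Actor {
  def receive = {
    case FindUserById(id) =>
      try {
        sender() ! UserFound(SomeDAO.getSomeObjectById(id))   // assumed DAO from above
      } catch {
        case NonFatal(e) => sender() ! UserLookupFailed(id, e)
      }
  }
}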
P.S. The router will not restart failed workers, nor will it try to resend messages to them by default. Take a look at supervisor strategies and Option 1 above for how to achieve that.

Why do I get 'Handshake timed out'? How can I configure timeout period?

I am new to Akka actors and I am doing some tests. Suppose I have actors performing long-running tasks like the following:
override def receive = {
  case email: Email => /*Future*/ {
    Thread sleep 3000
  }
}
I ran a stress test (remote actors on another machine in the network) and I received the following error:
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://EmailSystem#192.168.1.6:5000]
Caused by: akka.remote.transport.AkkaProtocolException: No response from remote. Handshake timed out
How can I configure this so that I don't get this error again? Should I use a Future in the receive method instead of plain blocking code (as in the comment above)? What would be the impact of doing that?
It's a really bad idea to have an actor that blocks for a long time like that, since it cannot respond to messages in the meantime; additionally, the Akka default thread pool has roughly one thread per CPU core, so you might also be stopping other actors from processing any messages.
Fork that blocking job onto a separate execution context/thread pool instead (and make sure to limit how many threads there are in that thread pool). You can then notify the actor using pipeTo:
import akka.pattern.pipe
import scala.concurrent.Future

// Future { ... } needs an implicit ExecutionContext in scope; use a dedicated one
// for the blocking work (see the sketch below), not the actor's own dispatcher
case email: Email =>
  val futureEmail = Future {
    // ... send email and then ...
    EmailSent()
  }
  futureEmail pipeTo sender()
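One way to get that dedicated, size-limited pool is a custom dispatcher; a sketch, assuming a block named "email-blocking-dispatcher" exists in application.conf:
// application.conf (assumed):
//   email-blocking-dispatcher {
//     type = Dispatcher
//     executor = "thread-pool-executor"
//     thread-pool-executor { fixed-pool-size = 16 }
//   }

// inside the actor: run the blocking Future on that pool instead of the default dispatcher
implicit val blockingEc: scala.concurrent.ExecutionContext =
  context.system.dispatchers.lookup("email-blocking-dispatcher")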

Akka timeouts starting event handlers

When I run my app using akka, it fails with the following exception:
Event Handler specified in config can't be loaded [com.despegar.hasp.impl.DummyLogEventHandler] due to [a06c8d75-0f07-40db-883a-16dc2914934bakka.event.Logging$LoggerInitializationException: Logger log1-DummyLogEventHandler did not respond with LoggerInitialized, sent instead [TIMEOUT]
DummyLogEventHandler is defined as:
class DummyLogEventHandler extends Actor {
  def receive = {
    case InitializeLogger(_) => sender ! LoggerInitialized
    case Error(cause, logSource, logClass, message) =>
    case Warning(logSource, logClass, message) =>
    case Info(logSource, logClass, message) =>
    case Debug(logSource, logClass, message) =>
  }
}
My configuration has the following lines:
event-handlers = ["my.app.DummyLogEventHandler"]
event-handler-startup-timeout = 15s
But I've also tried with the default logger:
event-handlers = []
and with slf4j (my app is using the logback backend and logging works ok):
event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
Neither of those event handlers nor increasing the timeout to 60 seconds has worked so far. Moreover, the timeout is thrown sporadically: when I run the test suite, the exception shows up in different tests every time.
Can you help me find a solution?
Thanks,
Alex.
Try setting a larger timeout for the logger initialization, for example in the conf file:
akka { logger-startup-timeout = 25s }
Check out the docs:
http://doc.akka.io/docs/akka/snapshot/general/configuration.html
After discussion on akka-user the following was found.
This problem is a symptom of configuring akka.actor.default-dispatcher to be of type = BalancingDispatcher, which cannot work; see the docs for that dispatcher type:
All the actors share a single Mailbox that they get their messages from.
It is assumed that all actors using the same instance of this dispatcher can process all messages that have been sent to one of the actors; i.e. the actors belong to a pool of actors, and to the client there is no guarantee about which actor instance actually processes a given message.
Sharability: Actors of the same type only
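A sketch of the corresponding fix, assuming you only want to override the offending setting at ActorSystem creation (you could equally just remove the type = BalancingDispatcher line from application.conf):
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// keep the default dispatcher an ordinary Dispatcher; use a balancing pool only for a
// dedicated group of identical actors, never as akka.actor.default-dispatcher
val config = ConfigFactory.parseString("""
  akka.actor.default-dispatcher {
    type = Dispatcher
    executor = "fork-join-executor"
  }
""").withFallback(ConfigFactory.load())

val system = ActorSystem("MySystem", config)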
In my case, adding the timeout setting in the conf file did not work, but setting the system property directly as below did:
System.setProperty("akka.logger-startup-timeout", "30s")

Implement timeout in actors

I am new to Scala and actors. I need to implement the following hypothetical situation:
The server waits for messages; if it does not get any within, say, a 10 s period, it sends a message to the client. Otherwise it processes the incoming messages. If it is in the middle of processing one message and another message arrives, the new message needs to be queued (I suppose that is done automatically by Scala actors).
The second problem I am encountering is sleeping. I need the actor to sleep for some constant period of time when it receives a message, but on the other hand I can't block, because I want incoming messages to be queued for further processing.
How about this?
loop {
  reactWithin(10000) {
    case TIMEOUT => // send message to client
    case work => // do work
  }
}
Daniel has provided a better answer to the no-input part of the question, so I've edited out my inferior solution.
As for the delayed-response part of the question: the message queue doesn't block while an actor sleeps. The actor can just sleep and messages will still accumulate.
However, if you want a fixed delay from when you receive a message to when you process it, you can, for example, create an actor that reacts immediately but wraps each message in a request for a delay:
case class Delay(when: Long, what: Any) { }
// Inside class DelayingActor(workingActor: Actor)
case msg => workingActor ! Delay(delayValue + System.currentTimeMillis, msg)
Then the working actor would do:
case Delay(t, msg) =>
  val t0 = System.currentTimeMillis
  if (t > t0) Thread.sleep(t - t0)
  msg match {
    // Handle message
  }
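Put together as a complete (if simplified) sketch with the old scala.actors API, the class and value names being made up:
import scala.actors.Actor
import scala.actors.Actor._

case class Delay(when: Long, what: Any)

// only this actor sleeps; messages keep queuing up in its mailbox meanwhile
class WorkingActor extends Actor {
  def act() {
    loop {
      react {
        case Delay(t, msg) =>
          val now = System.currentTimeMillis
          if (t > now) Thread.sleep(t - now)
          println("processing " + msg)            // handle the real message here
      }
    }
  }
}

// accepts messages immediately and just tags them with the time they become due
class DelayingActor(workingActor: Actor, delayMs: Long) extends Actor {
  def act() {
    loop {
      react {
        case msg => workingActor ! Delay(System.currentTimeMillis + delayMs, msg)
      }
    }
  }
}

// wiring
val worker = new WorkingActor
worker.start()
val delayer = new DelayingActor(worker, 2000)
delayer.start()
delayer ! "hello"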