I'm building an app with two kinds of nodes (front and back) on Akka 2.5.1, and I'm using Akka sharding for load and data distribution across the back nodes. The front nodes use a shard proxy to send messages to the back. Shard initialization is as follows:
val renditionManager: ActorRef =
  if (nodeRole == "back")
    clusterSharding.start(
      typeName = "Rendition",
      entityProps = Manager.props,
      settings = ClusterShardingSettings(system),
      extractEntityId = Manager.idExtractor,
      extractShardId = Manager.shardResolver)
  else
    clusterSharding.startProxy(
      typeName = "Rendition",
      role = None,
      extractEntityId = Manager.idExtractor,
      extractShardId = Manager.shardResolver)
And I get some dead-letter log entries (most omitted for brevity):
[info] [INFO] [06/02/2017 11:39:13.770] [wws-renditions-akka.actor.default-dispatcher-26] [akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] Message [akka.cluster.sharding.ShardCoordinator$Internal$Register] from Actor[akka.tcp://wws-renditions@127.0.0.1:2552/system/sharding/Rendition#1607279929] to Actor[akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] was not delivered. [8] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[info] [INFO] [06/02/2017 11:39:15.607] [wws-renditions-akka.actor.default-dispatcher-21] [akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] Message [akka.cluster.sharding.ShardCoordinator$Internal$RegisterProxy] from Actor[akka://wws-renditions/system/sharding/Rendition#-267271026] to Actor[akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] was not delivered. [9] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[info] [INFO] [06/02/2017 11:39:15.762] [wws-renditions-akka.actor.default-dispatcher-21] [akka://wws-renditions/system/sharding/replicator] Message [akka.cluster.ddata.Replicator$Internal$Status] from Actor[akka.tcp://wws-renditions@127.0.0.1:2552/system/sharding/replicator#-126233532] to Actor[akka://wws-renditions/system/sharding/replicator] was not delivered. [10] dead letters encountered, no more dead letters will be logged. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
And if I try to use the proxy, it fails to deliver the message and shows:
[info] [WARN] [06/02/2017 12:12:28.047] [wws-renditions-akka.actor.default-dispatcher-15] [akka.tcp://wws-renditions@127.0.0.1:2551/system/sharding/Rendition] Retry request for shard [51] homes from coordinator at [Actor[akka.tcp://wws-renditions@127.0.0.1:2552/system/sharding/RenditionCoordinator/singleton/coordinator#-1550443839]]. [1] buffered messages.
On the other hand, if I start a non-proxy shard region on both kinds of nodes (front and back), it works properly.
Any advice? Thanks.
UPDATE
I finally figured out why it was trying to connect to shards on the wrong nodes. If a shard is only meant to be started on a single kind of node, you need to add the following configuration:
akka.cluster.sharding {
  role = "yourRole"
}
This way, Akka sharding will only look up shards on nodes tagged with the role "yourRole".
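The same restriction can also be expressed programmatically when starting the region and the proxy. A minimal sketch, assuming the back nodes carry the role "back" (withRole and the proxy's role parameter are part of the Akka 2.5 sharding API):
val renditionManager: ActorRef =
  if (nodeRole == "back")
    clusterSharding.start(
      typeName = "Rendition",
      entityProps = Manager.props,
      // only allocate shards on nodes tagged with role "back"
      settings = ClusterShardingSettings(system).withRole("back"),
      extractEntityId = Manager.idExtractor,
      extractShardId = Manager.shardResolver)
  else
    clusterSharding.startProxy(
      typeName = "Rendition",
      // the proxy must point at the role that hosts the shards
      role = Some("back"),
      extractEntityId = Manager.idExtractor,
      extractShardId = Manager.shardResolver)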
The proxy is still not able to connect to the shard coordinator and deliver messages to the shards; I get the following log trace:
[WARN] [06/06/2017 12:09:25.754] [cluster-nodes-akka.actor.default-dispatcher-16] [akka.tcp://cluster-nodes@127.0.0.1:2551/system/sharding/Manager] Retry request for shard [52] homes from coordinator at [Actor[akka.tcp://cluster-nodes@127.0.0.1:2552/system/sharding/ManagerCoordinator/singleton/coordinator#-2111378619]]. [1] buffered messages.
So help would be nice :)
Got it!
I made two mistakes; for the first one, check the UPDATE section in the main question.
The second is that, for some reason, two shard regions need to be up within the cluster (for testing purposes I was using only one). No clue whether this is stated somewhere in the Akka docs.
Related
I've played around with the lagom-scala-word-count Activator template and was forced to kill the application process. Since then, embedded Kafka doesn't work: this project, and every new one I create, has become unusable. I've tried:
running sbt clean to delete embedded Kafka data
creating a brand new project (from other Activator templates)
restarting my machine.
Despite this, I can't get Lagom to work. During the first launch I get the following lines in the log:
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 1 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 2 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] o.a.k.c.NetworkClient - Error while fetching metadata with correlation id 4 : {wordCount=LEADER_NOT_AVAILABLE}
[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds
[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds
Subsequent launches result in:
[info] Starting Kafka
[info] Starting Cassandra
....Kafka Server closed unexpectedly.
....
[info] Cassandra server running at 127.0.0.1:4000
I've posted full server.log from lagom-internal-meta-project-kafka at https://gist.github.com/szymonbaranczyk/a93273537b42aafa45bf67446dd41adb.
Is it possible that some corrupted embedded Kafka data is stored globally on my PC and is causing this?
For future reference, as James mentioned in the comments, you have to delete the folder lagom-internal-meta-project-kafka in target/lagom-dynamic-projects. I don't know why it doesn't get deleted automatically.
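From the project root, that amounts to:
rm -rf target/lagom-dynamic-projects/lagom-internal-meta-project-kafka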
I'm using a remote actor, and I'm trying to log when:
A connection to the remote server is established.
The established connection is disconnected.
And, hopefully, I would like to assign a random id every time the actor tries to re-establish the connection.
My attempt is below.
import javax.inject.Inject
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._
import scala.util.Random
import akka.actor._

class SampleActor @Inject() (implicit system: ActorSystem, ec: ExecutionContext) extends Actor with ActorLogging {

  val remote = context.actorSelection(s"akka.tcp://PathToRemoteActor")

  val initialIdentifyId = getNextId
  remote ! Identify(initialIdentifyId)

  def receive = establish(initialIdentifyId)

  def doIdentify(nextId: Int) = {
    context become establish(nextId)
    system.scheduler.scheduleOnce(10.seconds) { remote ! Identify(nextId) }
  }

  def getNextId = new Random().nextInt(10000)

  def establish(identifyId: Int): Actor.Receive = {
    case ActorIdentity(`identifyId`, Some(ref)) =>
      log.info(s"Connection to Remote Actor established with identifyId: $identifyId")
      context.watch(ref)
      context.become(active(identifyId, ref))
    case ActorIdentity(`identifyId`, None) =>
      log.info(s"Connecting to Remote Actor failed. identifyId: $identifyId")
      doIdentify(getNextId)
    case foo: Foo => // do something (Foo is the application's own message type)
  }

  def active(identifyId: Int, remoteRef: ActorRef): Actor.Receive = {
    case Terminated(`remoteRef`) =>
      context.unwatch(remoteRef)
      log.error(s"Remote Actor was disconnected. identifyId: $identifyId")
      doIdentify(getNextId)
    case foo: Foo => // do something with the remote actor.
  }
}
The code above is the client side of the remote actor.
This works as I expected, except that the line system.scheduler.scheduleOnce(10.seconds) { remote ! Identify(nextId) } seems to implicitly trigger some sort of heartbeat connection.
If I keep the remote actor shut down for a while, then start it, and then shut it down again after a while, the Akka debug log looks something like this:
[warn] a.r.ReliableDeliverySupervisor - Association with remote system [akka.tcp://foo-remote@127.0.0.1:2552] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
[info] a.r.RemoteActorRefProvider$RemoteDeadLetterActorRef - Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://foo-client/system/remote-watcher#1679701185] to Actor[akka://foo-client/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[info] a.r.RemoteActorRefProvider$RemoteDeadLetterActorRef - Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://foo-client/system/remote-watcher#1679701185] to Actor[akka://foo-client/deadLetters] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[info] a.r.RemoteActorRefProvider$RemoteDeadLetterActorRef - Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://foo-client/system/remote-watcher#1679701185] to Actor[akka://foo-client/deadLetters] was not delivered. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[info] a.r.RemoteActorRefProvider$RemoteDeadLetterActorRef - Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://foo-client/system/remote-watcher#1679701185] to Actor[akka://foo-client/deadLetters] was not delivered. [4] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[info] a.r.RemoteActorRefProvider$RemoteDeadLetterActorRef - Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://foo-client/system/remote-watcher#1679701185] to Actor[akka://foo-client/deadLetters] was not delivered. [5] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
...
If I remove the line system.scheduler.scheduleOnce(10.seconds) { remote ! Identify(nextId) }, only one heartbeat dead-letter log line is produced.
How can I avoid the multiple heartbeats?
Maybe the way I use Identify(nextId) is wrong?
Thanks in advance.
I'm using ReactiveMongo 0.11.14 with Play 2.5.8. When I try to shut down ReactiveMongo, I get a bunch of Akka messages about dead letters. The lost messages all seem to relate directly to closing down (they are all of class ChannelClosed or ChannelDisconnected).
How do I properly shut down ReactiveMongo to avoid these annoying messages?
Here's the way I'm trying to do it now:
import scala.concurrent.Await
import scala.concurrent.duration._

for (connection <- reactiveMongoApi.driver.connections) {
  // close them sequentially just to be extra safe
  Await.ready(connection.askClose()(5.seconds), 5.seconds)
}
reactiveMongoApi.driver.close(5.seconds)
Here are two examples of the dead-letter messages I get:
[INFO] [10/06/2016 15:14:07.387] [reactivemongo-akka.actor.default-dispatcher-7] [akka://reactivemongo/user/Connection-2] Message [reactivemongo.core.actors.ChannelClosed] from Actor[akka://reactivemongo/deadLetters] to Actor[akka://reactivemongo/user/Connection-2#1711870153] was not delivered. [9] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [10/06/2016 15:14:07.387] [reactivemongo-akka.actor.default-dispatcher-7] [akka://reactivemongo/user/Connection-2] Message [reactivemongo.core.actors.ChannelDisconnected] from Actor[akka://reactivemongo/deadLetters] to Actor[akka://reactivemongo/user/Connection-2#1711870153] was not delivered. [10] dead letters encountered, no more dead letters will be logged. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
I've tried closing the connections after the driver, skipping connection close() altogether, and initiating a shutdown via driver.system.terminate() instead of driver.close().
I want to start using actors with a heavy message rate. The actor's last state is very important.
I was following the persistence example shown here: http://doc.akka.io/docs/akka/2.3.9/scala/persistence.html#event-sourcing
I tried to send a heavy load of messages:
for (i <- 0 to 100000) {
  persistentActor ! Cmd("foo" + i)
}
and used persistAsync like this:
val receiveCommand: Receive = {
  case Cmd(data) =>
    persistAsync(Evt(s"${data}-${numEvents}"))(updateState)
  case "snap"  => saveSnapshot(state)
  case "print" => println(state)
}
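For context, here is a minimal sketch of the surrounding persistent actor, adapted from the event-sourcing example in the linked docs (Cmd, Evt, and ExampleState are the definitions from that example; the persistenceId is assumed to match the actor name in the logs below):
import akka.persistence.{PersistentActor, SnapshotOffer}

case class Cmd(data: String)
case class Evt(data: String)

case class ExampleState(events: List[String] = Nil) {
  def updated(evt: Evt): ExampleState = copy(evt.data :: events)
  def size: Int = events.length
  override def toString: String = events.reverse.toString
}

class ExamplePersistentActor extends PersistentActor {
  override def persistenceId = "persistentActor-4-scala"

  var state = ExampleState()

  def updateState(event: Evt): Unit =
    state = state.updated(event)

  def numEvents = state.size

  // Replayed events and snapshots are applied here on (re)start
  val receiveRecover: Receive = {
    case evt: Evt                                 => updateState(evt)
    case SnapshotOffer(_, snapshot: ExampleState) => state = snapshot
  }

  val receiveCommand: Receive = {
    case Cmd(data) =>
      // persistAsync does not stash incoming commands while persisting,
      // which is what makes it suitable for a high message rate
      persistAsync(Evt(s"${data}-${numEvents}"))(updateState)
    case "snap"  => saveSnapshot(state)
    case "print" => println(state)
  }
}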
Before shutdown I added Thread.sleep(150000) to make sure that everything gets persisted. At first all seems to be OK; however, re-running the app shows that some messages go to dead letters:
[INFO] [02/03/2015 15:35:18.187] [example-akka.actor.default-dispatcher-3] [akka://example/user/persistentActor-4-scala] Message [java.lang.String] from Actor[akka://example/deadLetters] to Actor[akka://example/user/persistentActor-4-scala#1206460640] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [02/03/2015 15:35:18.192] [example-akka.actor.default-dispatcher-3] [akka://example/user/persistentActor-4-scala] Message [akka.persistence.Recover] from Actor[akka://example/user/persistentActor-4-scala#1206460640] to Actor[akka://example/user/persistentActor-4-scala#1206460640] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
Or I get something like:
[INFO] [02/03/2015 15:54:32.732] [example-akka.actor.default-dispatcher-11] [akka://example/user/persistentActor-4-scala] Message [akka.persistence.JournalProtocol$ReplayedMessage] from Actor[akka://example/deadLetters] to Actor[akka://example/user/persistentActor-4-scala#-973984210] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [02/03/2015 15:54:32.735] [example-akka.actor.default-dispatcher-3] [akka://example/user/persistentActor-4-scala] Message [akka.persistence.JournalProtocol$ReplayedMessage] from Actor[akka://example/deadLetters] to Actor[akka://example/user/persistentActor-4-scala#-973984210] was not delivered. [10] dead letters encountered, no more dead letters will be logged. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
A fatal error has been detected by the Java Runtime Environment:
SIGSEGV (0xb) at pc=0x00007fa2a3e06b6a, pid=18870, tid=140335801857792
JRE version: Java(TM) SE Runtime Environment (7.0_71-b14) (build 1.7.0_71-b14)
Java VM: Java HotSpot(TM) 64-Bit Server VM (24.71-b01 mixed mode linux-amd64 compressed oops)
Problematic frame:
V [libjvm.so+0x97bb6a] Unsafe_GetNativeByte+0xaa
Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
An error report file with more information is saved as:
/home/tadmin/projects/akka-sample-persistence-scala/hs_err_pid18870.log
If you would like to submit a bug report, please visit:
http://bugreport.sun.com/bugreport/crash.jsp
How can I persist the state of an actor that has to handle a heavy rate of messages?
I suspect that the default journal, LevelDB, is the problem, and it is almost certainly what is core dumping. Are you by any chance writing to the database from more than one actor concurrently? In my experience, I've seen it core dump in that scenario. You could try it in shared mode, but I just plugged in a different database and the problems went away. In my case I used Cassandra.
I would try an in-memory journal plugin for akka-persistence. It is very easy to swap in. If the problems go away, then you know that LevelDB is the problem; if so, go with a different database.
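Swapping the journal is mostly a configuration change. A minimal sketch, assuming the community akka-persistence-inmemory plugin (you also need to add its artifact to your build; these are the plugin ids it documents):
# application.conf
akka.persistence.journal.plugin = "inmemory-journal"
akka.persistence.snapshot-store.plugin = "inmemory-snapshot-store"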
I am following this tutorial
http://alvinalexander.com/scala/simple-akka-actors-remote-example
I am following it as-is, but my program doesn't run; it gives me errors, and I am confused about this line:
val remote = context.actorFor("akka://HelloRemoteSystem#127.0.0.1:5150/user/RemoteActor")
What do I have to write in place of "user"? When I write the full path instead, for example:
val remote = context.actorFor("akka://HelloRemoteSystem#127.0.0.1:5150/sw/opt/programs/akka/akkaremoting/RemoteActor")
and run both hellolocal and helloremote, both give me errors about looking up the actor at that address.
And if I write the code as-is, it gives me errors.
helloremote errors:
[INFO] [10/27/2014 16:06:23.736] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#911921687] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
^Csarawaheed@ubuntu:/opt/ifkaar/programs/akka/akkaremoting/helloremote$ sbt run
[info] Loading project definition from /opt/ifkaar/programs/akka/akkaremoting/helloremote/project
[info] Set current project to helloremote (in build file:/opt/ifkaar/programs/akka/akkaremoting/helloremote/)
[info] Running HelloRemote
Remote Actor receive messgage : The remote actor is alive
[INFO] [10/27/2014 17:24:06.136] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#-792263999] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
hellolocal errors:
[INFO] [10/27/2014 16:06:23.736] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#911921687] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
^Csarawaheed@ubuntu:/opt/ifkaar/programs/akka/akkaremoting/helloremote$ sbt run
[info] Loading project definition from /opt/ifkaar/programs/akka/akkaremoting/helloremote/project
[info] Set current project to helloremote (in build file:/opt/ifkaar/programs/akka/akkaremoting/helloremote/)
[info] Running HelloRemote
Remote Actor receive messgage : The remote actor is alive
[INFO] [10/27/2014 17:24:06.136] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#-792263999] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
A remote Akka actor path is made up of the following components:
protocol:actor system name:server address:remoting port:root path:actor path + name
So for the first example, those components end up being:
protocol = akka://
actor system name = HelloRemoteSystem
server address = 127.0.0.1
remoting port = 5150
root path = user
actor path + name = RemoteActor
All actors started by you in your custom actor code roll up under the user root. Akka uses another root called system for system-level actors. These actors fall under a separate hierarchy and supervision scheme from the custom ones that a user needs for their application. So user should always be part of the path to the custom RemoteActor from the example. Then, because RemoteActor is started as a top-level actor (no direct supervisor; started from the system as opposed to the context of another actor) with the name "RemoteActor", it rolls up under the path /user/RemoteActor. So, putting it all together, the path to use to remotely look up that actor is the one given in the example code:
"akka://HelloRemoteSystem#127.0.0.1:5150/user/RemoteActor"
You don't need to change "user".
If you look at the Akka documentation, you'll see that "user" is a subdirectory of each actor system that contains all user-created actors; it's not the name of your user :)