How to persist actor state with a high rate of messages - Scala

I want to start using actors with a heavy rate of messages. The actor's last state is very important.
I was following the persistence example shown here: http://doc.akka.io/docs/akka/2.3.9/scala/persistence.html#event-sourcing
I tried to send a heavy load of messages:
for (i <- 0 to 100000) {
  persistentActor ! Cmd("foo" + i)
}
and used persistAsync like this:
val receiveCommand: Receive = {
  case Cmd(data) =>
    persistAsync(Evt(s"${data}-${numEvents}"))(updateState)
  case "snap"  => saveSnapshot(state)
  case "print" => println(state)
}
Before shutdown I added Thread.sleep(150000) to make sure that all persists complete. At first everything seems to be OK; however, re-running the app shows that some messages are going to dead letters:
[INFO] [02/03/2015 15:35:18.187] [example-akka.actor.default-dispatcher-3] [akka://example/user/persistentActor-4-scala] Message [java.lang.String] from Actor[akka://example/deadLetters] to Actor[akka://example/user/persistentActor-4-scala#1206460640] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [02/03/2015 15:35:18.192] [example-akka.actor.default-dispatcher-3] [akka://example/user/persistentActor-4-scala] Message [akka.persistence.Recover] from Actor[akka://example/user/persistentActor-4-scala#1206460640] to Actor[akka://example/user/persistentActor-4-scala#1206460640] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
or getting something like:
[INFO] [02/03/2015 15:54:32.732] [example-akka.actor.default-dispatcher-11] [akka://example/user/persistentActor-4-scala] Message [akka.persistence.JournalProtocol$ReplayedMessage] from Actor[akka://example/deadLetters] to Actor[akka://example/user/persistentActor-4-scala#-973984210] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [02/03/2015 15:54:32.735] [example-akka.actor.default-dispatcher-3] [akka://example/user/persistentActor-4-scala] Message [akka.persistence.JournalProtocol$ReplayedMessage] from Actor[akka://example/deadLetters] to Actor[akka://example/user/persistentActor-4-scala#-973984210] was not delivered. [10] dead letters encountered, no more dead letters will be logged. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
A fatal error has been detected by the Java Runtime Environment:
SIGSEGV (0xb) at pc=0x00007fa2a3e06b6a, pid=18870, tid=140335801857792
JRE version: Java(TM) SE Runtime Environment (7.0_71-b14) (build 1.7.0_71-b14)
Java VM: Java HotSpot(TM) 64-Bit Server VM (24.71-b01 mixed mode linux-amd64 compressed oops)
Problematic frame:
V [libjvm.so+0x97bb6a] Unsafe_GetNativeByte+0xaa
Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
An error report file with more information is saved as:
/home/tadmin/projects/akka-sample-persistence-scala/hs_err_pid18870.log
If you would like to submit a bug report, please visit:
http://bugreport.sun.com/bugreport/crash.jsp
How can I persist the state of an actor that has to handle a heavy rate of messages?

I suspect that the default database, LevelDB, is the problem, and it is most certainly what is core dumping. Are you by any chance writing to the DB from more than one actor concurrently? In my experience, I've seen it core dump in that scenario. You could try it in shared mode, but I just plugged in a different database and the problems went away. In my case I used Cassandra.
I would try an in-memory journal plugin for akka-persistence; it is very easy to swap in. If the problems go away, then you know that LevelDB is the problem. If so, go with a different database.
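For example, swapping the journal is just a configuration change in application.conf. A minimal sketch, assuming Akka's bundled in-memory test journal and the plugin id used by the akka-persistence-cassandra plugin (check your plugin's docs for the exact id):

# in-memory journal bundled with Akka; for testing only, events are lost on restart
akka.persistence.journal.plugin = "akka.persistence.journal.inmem"

# if LevelDB turns out to be the culprit, switch to a durable store, e.g. Cassandra:
# akka.persistence.journal.plugin = "cassandra-journal"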

Related

Unable to connect to the NetBeans Distribution because of Zero sized file

I recently reinstalled the NetBeans IDE on my Windows 10 PC in order to restore some unrelated configurations. When I tried checking for new plugins so that I could download the Sakila sample database, I got this error.
I've tested the connection with both No Proxy and Use Proxy Settings, and both connection tests seem to end successfully.
I have allowed NetBeans through my firewall, but this has changed nothing either.
I haven't touched my proxy configuration, so it's on the default (autodetect). Switching autodetect off doesn't change anything either, no matter what proxy config I have in NetBeans.
Here's part of my log file that might be helpful:
Compiler: HotSpot 64-Bit Tiered Compilers
Heap memory usage: initial 32,0MB maximum 910,5MB
Non heap memory usage: initial 2,4MB maximum -1b
Garbage collector: PS Scavenge (Collections=12 Total time spent=0s)
Garbage collector: PS MarkSweep (Collections=3 Total time spent=0s)
Classes: loaded=6377 total loaded=6377 unloaded 0
INFO [org.netbeans.core.ui.warmup.DiagnosticTask]: Total memory 17.130.041.344
INFO [org.netbeans.modules.autoupdate.updateprovider.DownloadListener]: Connection content length was 0 bytes (read 0bytes), expected file size can`t be that size - likely server with file at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b is temporary down
INFO [org.netbeans.modules.autoupdate.ui.Utilities]: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
java.io.IOException: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.doCopy(DownloadListener.java:155)
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.streamOpened(DownloadListener.java:78)
at org.netbeans.modules.autoupdate.updateprovider.NetworkAccess$Task$1.run(NetworkAccess.java:111)
Caused: java.io.IOException: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.notifyException(DownloadListener.java:103)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogCache.copy(AutoupdateCatalogCache.java:246)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogCache.writeCatalogToCache(AutoupdateCatalogCache.java:99)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogProvider.refresh(AutoupdateCatalogProvider.java:154)
at org.netbeans.modules.autoupdate.services.UpdateUnitProviderImpl.refresh(UpdateUnitProviderImpl.java:180)
at org.netbeans.api.autoupdate.UpdateUnitProvider.refresh(UpdateUnitProvider.java:196)
[catch] at org.netbeans.modules.autoupdate.ui.Utilities.tryRefreshProviders(Utilities.java:433)
at org.netbeans.modules.autoupdate.ui.Utilities.doRefreshProviders(Utilities.java:411)
at org.netbeans.modules.autoupdate.ui.Utilities.presentRefreshProviders(Utilities.java:405)
at org.netbeans.modules.autoupdate.ui.UnitTab$14.run(UnitTab.java:806)
at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:1423)
at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:2033)
It might be that the update server is just down right now; I haven't been able to test this either. But it also might be something wrong with my configuration. I'm going crazy!
Something that worked for me was changing "http:" to "https:" in the update URLs.
E.g., change "http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz"
to "https://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz".
No idea why that makes it work on my end. I'm running Linux Mint 19.1.

How to turn off WSTestClient logging client name when running tests?

When I run tests in Play 2.6.x that use the WsTestClient, like:
WsTestClient.withClient { client => ...
I see in the logs:
[info] p.a.t.WsTestClient$SingletonWSClient - createNewClient: name = ws-test-client-1
I believe this is coming from this line.
Why is it logging at INFO if it's set to log level WARN? Can I silence the noise somehow?
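A sketch of one possible way to quiet it, assuming Play's default Logback setup and assuming the abbreviated p.a.t prefix expands to play.api.test (both are guesses from the log line, not confirmed): add a logger override to conf/logback-test.xml:

<!-- assumption: the test-client logger lives under play.api.test -->
<logger name="play.api.test" level="WARN" />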

Akka sharding proxy cannot contact the coordinator

I'm building an app with two kinds of nodes (front and back) on Akka 2.5.1. I'm using Akka sharding for load and data distribution across the back nodes. The front node uses a shard proxy to send messages to the back. Shard initialisation is as follows:
val renditionManager: ActorRef =
  if (nodeRole == "back")
    clusterSharding.start(
      typeName = "Rendition",
      entityProps = Manager.props,
      settings = ClusterShardingSettings(system),
      extractEntityId = Manager.idExtractor,
      extractShardId = Manager.shardResolver)
  else
    clusterSharding.startProxy(
      typeName = "Rendition",
      role = None,
      extractEntityId = Manager.idExtractor,
      extractShardId = Manager.shardResolver)
And I got some dead-letter logs (most entries omitted for brevity):
[info] [INFO] [06/02/2017 11:39:13.770] [wws-renditions-akka.actor.default-dispatcher-26] [akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] Message [akka.cluster.sharding.ShardCoordinator$Internal$Register] from Actor[akka.tcp://wws-renditions@127.0.0.1:2552/system/sharding/Rendition#1607279929] to Actor[akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] was not delivered. [8] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[info] [INFO] [06/02/2017 11:39:15.607] [wws-renditions-akka.actor.default-dispatcher-21] [akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] Message [akka.cluster.sharding.ShardCoordinator$Internal$RegisterProxy] from Actor[akka://wws-renditions/system/sharding/Rendition#-267271026] to Actor[akka://wws-renditions/system/sharding/RenditionCoordinator/singleton/coordinator] was not delivered. [9] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[info] [INFO] [06/02/2017 11:39:15.762] [wws-renditions-akka.actor.default-dispatcher-21] [akka://wws-renditions/system/sharding/replicator] Message [akka.cluster.ddata.Replicator$Internal$Status] from Actor[akka.tcp://wws-renditions@127.0.0.1:2552/system/sharding/replicator#-126233532] to Actor[akka://wws-renditions/system/sharding/replicator] was not delivered. [10] dead letters encountered, no more dead letters will be logged. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
And if I try to use the proxy, it fails to deliver and shows:
[info] [WARN] [06/02/2017 12:12:28.047] [wws-renditions-akka.actor.default-dispatcher-15] [akka.tcp://wws-renditions@127.0.0.1:2551/system/sharding/Rendition] Retry request for shard [51] homes from coordinator at [Actor[akka.tcp://wws-renditions@127.0.0.1:2552/system/sharding/RenditionCoordinator/singleton/coordinator#-1550443839]]. [1] buffered messages.
On the other hand, if I start a non-proxy shard region on both kinds of nodes (front and back), it works properly.
Any advice? Thanks.
UPDATE
I finally figured out why it was trying to connect to shards on the wrong nodes. If a shard is only meant to be started on a single kind of node, you need to add the following configuration:
akka.cluster.sharding {
  role = "yourRole"
}
This way, Akka sharding will only look for shards on nodes tagged with the role "yourRole".
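For reference, the same restriction can also be expressed on the proxy side through startProxy's role parameter. A minimal sketch, assuming the shard-hosting nodes are tagged with the role "back" as in this setup:

val renditionManager: ActorRef =
  clusterSharding.startProxy(
    typeName = "Rendition",
    role = Some("back"), // must match the role of the nodes hosting the shard regions
    extractEntityId = Manager.idExtractor,
    extractShardId = Manager.shardResolver)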
The proxy is still not able to connect to the shard coordinator and deliver messages to the shards, and I get the following log trace:
[WARN] [06/06/2017 12:09:25.754] [cluster-nodes-akka.actor.default-dispatcher-16] [akka.tcp://cluster-nodes@127.0.0.1:2551/system/sharding/Manager] Retry request for shard [52] homes from coordinator at [Actor[akka.tcp://cluster-nodes@127.0.0.1:2552/system/sharding/ManagerCoordinator/singleton/coordinator#-2111378619]]. [1] buffered messages.
so help would be nice :)
Got it!
I made two mistakes; for the first one, check the UPDATE section in the main question.
The second is that, for some reason, two shard regions need to be up within the cluster (for testing purposes I was using only one); no clue if this is stated somewhere in the Akka docs.

How can I shutdown ReactiveMongo 0.11.14 without a bunch of dead letter messages?

I'm using ReactiveMongo 0.11.14 with Play 2.5.8. When I try to shut down ReactiveMongo, I get a bunch of Akka messages about dead letters. The lost messages all seem to relate directly to closing down. (They are all of class ChannelClosed or ChannelDisconnected.)
How do I properly shut down ReactiveMongo to avoid these annoying messages?
Here's the way I'm trying to do it now:
import scala.concurrent.Await
import scala.concurrent.duration._

for (connection <- reactiveMongoApi.driver.connections) {
  // close them sequentially, just to be extra safe
  Await.ready(connection.askClose()(5.seconds), 5.seconds)
}
reactiveMongoApi.driver.close(5.seconds)
Here are two examples of the dead-letter messages I get:
[INFO] [10/06/2016 15:14:07.387] [reactivemongo-akka.actor.default-dispatcher-7] [akka://reactivemongo/user/Connection-2] Message [reactivemongo.core.actors.ChannelClosed] from Actor[akka://reactivemongo/deadLetters] to Actor[akka://reactivemongo/user/Connection-2#1711870153] was not delivered. [9] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [10/06/2016 15:14:07.387] [reactivemongo-akka.actor.default-dispatcher-7] [akka://reactivemongo/user/Connection-2] Message [reactivemongo.core.actors.ChannelDisconnected] from Actor[akka://reactivemongo/deadLetters] to Actor[akka://reactivemongo/user/Connection-2#1711870153] was not delivered. [10] dead letters encountered, no more dead letters will be logged. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
I've tried closing the connections after the driver, skipping connection close() altogether, and initiating a shutdown via driver.system.terminate() instead of driver.close().
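If the shutdown itself is actually clean and only the logging is the issue, the messages themselves point at a workaround: dead-letter logging can be turned down via the settings named in the log line (a sketch for application.conf; this silences the noise rather than preventing the dead letters):

akka.log-dead-letters = off
akka.log-dead-letters-during-shutdown = off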

How to give the accurate path to find the remote actor

I am following this tutorial
http://alvinalexander.com/scala/simple-akka-actors-remote-example
I am following it as-is, but my program doesn't run; it gives me errors, and I am confused about this line:
val remote = context.actorFor("akka://HelloRemoteSystem@127.0.0.1:5150/user/RemoteActor")
What do I have to write in place of "user"? When I write the full path to the end, for example:
val remote = context.actorFor("akka://HelloRemoteSystem@127.0.0.1:5150/sw/opt/programs/akka/akkaremoting/RemoteActor")
and run both hellolocal and helloremote, both give me errors about looking up the actor at that address.
And if I write the code as-is, it gives me errors.
helloremote errors:
[INFO] [10/27/2014 16:06:23.736] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#911921687] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
^Csarawaheed@ubuntu:/opt/ifkaar/programs/akka/akkaremoting/helloremote$ sbt run
[info] Loading project definition from /opt/ifkaar/programs/akka/akkaremoting/helloremote/project
[info] Set current project to helloremote (in build file:/opt/ifkaar/programs/akka/akkaremoting/helloremote/)
[info] Running HelloRemote
Remote Actor receive messgage : The remote actor is alive
[INFO] [10/27/2014 17:24:06.136] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#-792263999] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
hellolocal errors:
[INFO] [10/27/2014 16:06:23.736] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#911921687] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
^Csarawaheed@ubuntu:/opt/ifkaar/programs/akka/akkaremoting/helloremote$ sbt run
[info] Loading project definition from /opt/ifkaar/programs/akka/akkaremoting/helloremote/project
[info] Set current project to helloremote (in build file:/opt/ifkaar/programs/akka/akkaremoting/helloremote/)
[info] Running HelloRemote
Remote Actor receive messgage : The remote actor is alive
[INFO] [10/27/2014 17:24:06.136] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#-792263999] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
A remote Akka actor path is made up of the following components:
protocol:actor system name:server address:remoting port:root path:actor path + name
So for the first example, those components end up being:
protocol = akka://
actor system name = HelloRemoteSystem
server address = 127.0.0.1
remoting port = 5150
root path = user
actor path + name = RemoteActor
All actors started by you in your custom actor code will roll up under the user root. Akka uses another root called system for system-level actors. These actors fall under a separate hierarchy and supervision scheme from the custom ones that a user needs for their application. So user should always be part of the path to the custom RemoteActor from the example. Then, because RemoteActor is started as a top-level actor (no direct supervisor, started from the system as opposed to the context of another actor) with the name "RemoteActor", it will roll up under the path /user/RemoteActor. So, putting it all together, the path to use to remotely look up that actor is the one given in the example code:
"akka://HelloRemoteSystem#127.0.0.1:5150/user/RemoteActor"
You don't need to change "user".
If you look at the Akka documentation, you'll see that "user" is a subdirectory of each actor system that contains all user-related/user-created actors; it's not the name of your user. :)