Cannot configure Bounded Mailbox for Routees of RoundRobinPool - scala

When I try to set a bounded mailbox for the routees of a pool (RoundRobinPool) in the configuration file, Akka somehow ignores the mailbox configuration.
Here is the configuration I use:
bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 1
  mailbox-push-timeout-time = 1s
}

akka.actor.deployment {
  /singletestactor {
    mailbox = bounded-mailbox
  }
  /groupedtestactor {
    mailbox = bounded-mailbox
    router = round-robin-pool
    nr-of-instances = 5
  }
}
And here is the test code:
import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.FromConfig

object MailboxTest {
  def main(args: Array[String]): Unit = {
    val actorSystem = ActorSystem()
    val singleTestActor = actorSystem.actorOf(Props[TestActor], "singletestactor")
    for (i <- 1 to 10) {
      singleTestActor ! Hello(i)
    }
    val groupedTestActor = actorSystem.actorOf(Props[TestActor].withRouter(FromConfig), "groupedtestactor")
    for (i <- 1 to 1000) {
      groupedTestActor ! Hello(i)
    }
  }
}
class TestActor extends Actor {
  def receive = {
    case Hello(i) =>
      println(s"Hello($i) - begin!")
      Thread.sleep(10000)
      println(s"Hello($i) - end!")
  }
}

case class Hello(i: Int)
Am I doing something wrong, or is there no way to define a mailbox for routees?

You need to add a mailbox.requirements configuration in application.conf:

akka.actor.mailbox.requirements {
  "akka.dispatch.BoundedMessageQueueSemantics" = bounded-mailbox
}
Then you need to extend TestActor like this:

import akka.dispatch.{BoundedMessageQueueSemantics, RequiresMessageQueue}

class TestActor extends Actor with RequiresMessageQueue[BoundedMessageQueueSemantics]
See the Akka documentation on mailbox configuration for the details.
I also created the round-robin pool like this:

val groupedTestActor = actorSystem.actorOf(FromConfig.props(Props[TestActor]), "groupedtestactor")
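
Putting the answer's pieces together with the original configuration, a combined sketch (same names as above; the mailbox requirement plus the RequiresMessageQueue mixin is what makes the routees pick up bounded-mailbox):

bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 1
  mailbox-push-timeout-time = 1s
}

akka.actor.mailbox.requirements {
  "akka.dispatch.BoundedMessageQueueSemantics" = bounded-mailbox
}

akka.actor.deployment {
  /groupedtestactor {
    mailbox = bounded-mailbox
    router = round-robin-pool
    nr-of-instances = 5
  }
}

With mailbox-capacity = 1 and a 1s push timeout, messages that cannot be enqueued within the timeout end up in dead letters, which is an easy way to verify that the routees are actually using the bounded mailbox.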

Related

Play + Akka - Join the cluster and ask actor on another ActorSystem

I am able to make a Play app join the existing Akka cluster and then make ask calls to an actor running on another ActorSystem and get results back. But I am having trouble with a couple of things:
I see the following in the logs when Play tries to join the cluster. I suspect that Play is starting its own Akka cluster? I am really not sure what it means.
Could not register Cluster JMX MBean with name=akka:type=Cluster as it is already registered. If you are running multiple clusters in the same JVM, set 'akka.cluster.jmx.multi-mbeans-in-same-jvm = on' in config
Right now I'm re-initializing the ActorSystem every time a request comes to the Controller, which I know is not the right way to do it. I am new to the Scala, Akka, Play stack and am having difficulty figuring out how to make it a singleton service and inject it into my controller.
So far I have got this:
class DataRouter @Inject()(controller: DataController) extends SimpleRouter {
  val prefix = "/v1/data"

  override def routes: Routes = {
    case GET(p"/ip/$datatype") =>
      controller.get(datatype)
    case POST(p"/ip/$datatype") =>
      controller.process
  }
}
case class RangeInput(start: String, end: String)

object RangeInput {
  implicit val implicitWrites = new Writes[RangeInput] {
    def writes(range: RangeInput): JsValue = {
      Json.obj(
        "start" -> range.start,
        "end" -> range.end
      )
    }
  }
}
@Singleton
class DataController @Inject()(cc: ControllerComponents)(implicit exec: ExecutionContext) extends AbstractController(cc) {
  private val logger = Logger("play")
  implicit val timeout: Timeout = 115.seconds

  private val form: Form[RangeInput] = {
    import play.api.data.Forms._
    Form(
      mapping(
        "start" -> nonEmptyText,
        "end" -> text
      )(RangeInput.apply)(RangeInput.unapply)
    )
  }

  def get(datatype: String): Action[AnyContent] = Action.async { implicit request =>
    logger.info(s"show: datatype = $datatype")
    logger.trace(s"show: datatype = $datatype")
    //val r: Future[Result] = Future.successful(Ok("hello " + datatype ))
    val config = ConfigFactory.parseString("akka.cluster.roles = [gateway]").
      withFallback(ConfigFactory.load())
    implicit val system: ActorSystem = ActorSystem(SharedConstants.Actor_System_Name, config)
    implicit val materializer: ActorMaterializer = ActorMaterializer()
    implicit val executionContext = system.dispatcher
    val ipData = system.actorOf(
      ClusterRouterGroup(RandomGroup(Nil), ClusterRouterGroupSettings(
        totalInstances = 100, routeesPaths = List("/user/getipdata"),
        allowLocalRoutees = false, useRoles = Set("static"))).props())
    val res: Future[String] = (ipData ? datatype).mapTo[String]
    //val res: Future[List[Map[String, String]]] = (ipData ? datatype).mapTo[List[Map[String,String]]]
    val futureResult: Future[Result] = res.map { list =>
      Ok(Json.toJson(list))
    }
    futureResult
  }

  def process: Action[AnyContent] = Action.async { implicit request =>
    logger.trace("process: ")
    processJsonPost()
  }

  private def processJsonPost[A]()(implicit request: Request[A]): Future[Result] = {
    logger.debug(request.toString())

    def failure(badForm: Form[RangeInput]) = {
      Future.successful(BadRequest("Test"))
    }

    def success(input: RangeInput) = {
      val r: Future[Result] = Future.successful(Ok("hello " + Json.toJson(input)))
      r
    }

    form.bindFromRequest().fold(failure, success)
  }
}
akka {
  log-dead-letters = off
  log-dead-letters-during-shutdown = off

  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }

  remote {
    log-remote-lifecycle-events = off
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = ${myhost}
      port = 0
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://MyCluster@localhost:2541"
    ]
  }

  seed-nodes = ${?SEEDNODE}
}
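
Note that the trailing seed-nodes = ${?SEEDNODE} line sits at the akka level rather than inside the cluster block, so it defines akka.seed-nodes instead of overriding the seed node list. If the intent is an environment override, HOCON would need the full path, something like this (assuming the SEEDNODE substitution resolves to a list of seed nodes):

akka.cluster.seed-nodes = ${?SEEDNODE}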
Answers
Refer to this URL: https://www.playframework.com/documentation/2.6.x/ScalaAkka#Built-in-actor-system-name. It has details about configuring the actor system name.
You should not initialize an actor system on every request; use the Play-injected actor system instead. If you wish to customize the actor system, do it by modifying the Akka configuration. For that,
you should create your own ApplicationLoader extending GuiceApplicationLoader and override the builder method to supply your own Akka configuration. The rest is taken care of by Play, which injects this actor system into your application for you.
Refer to below URL
https://www.playframework.com/documentation/2.6.x/ScalaDependencyInjection#Advanced:-Extending-the-GuiceApplicationLoader
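
For the per-request ActorSystem problem specifically, Play can inject its own ActorSystem through Guice. Below is a minimal sketch of that approach; the router-group setup is copied from the question, while the injected system parameter, the actor name "ipDataRouter", and the trimmed-down get method are illustrative assumptions, not the definitive implementation:

@Singleton
class DataController @Inject()(cc: ControllerComponents, system: ActorSystem)
                              (implicit exec: ExecutionContext) extends AbstractController(cc) {

  implicit val timeout: Timeout = 115.seconds

  // Built once when the controller is instantiated, not once per request.
  private val ipData = system.actorOf(
    ClusterRouterGroup(RandomGroup(Nil), ClusterRouterGroupSettings(
      totalInstances = 100, routeesPaths = List("/user/getipdata"),
      allowLocalRoutees = false, useRoles = Set("static"))).props(),
    "ipDataRouter")

  def get(datatype: String): Action[AnyContent] = Action.async { implicit request =>
    (ipData ? datatype).mapTo[String].map(s => Ok(Json.toJson(s)))
  }
}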

Akka ClusterSingletonProxy to a remote deployed singleton

I'm trying to send a message, through another actor, to a singleton actor that was deployed on a remote node.
This is the manager, which waits for a MemberUp event, deploys the Worker actor on that node, and then sends the singleton a message:
object Manager extends App {
  val sys = ActorSystem("mySys", ConfigFactory.load("application").getConfig("manager"))
  sys.actorOf(Props[Manager], "manager")
}

class Manager extends Actor with ActorLogging {
  override def receive: Receive = {
    case MemberUp(member) if member.address != Cluster(context.system).selfAddress =>
      context.system.actorOf(ClusterSingletonManager.props(
        singletonProps = Props(classOf[Worker]),
        singletonName = "worker",
        terminationMessage = End,
        role = Some("worker")).withDeploy(Deploy(scope = RemoteScope(member.address))))
      context.actorOf(ClusterSingletonProxy.props(
        singletonPath = s"/user/singleton/worker",
        role = Some(s"worker")), "worker") ! "hello"
  }

  override def preStart(): Unit = {
    Cluster(context.system).subscribe(self, classOf[MemberUp])
  }
}
This is the worker:
object Worker extends App {
  ActorSystem("mySys", ConfigFactory.load("application").getConfig("worker"))
}

class Worker extends Actor with ActorLogging {
  override def receive: Receive = {
    case msg =>
      println(s"GOT MSG : $msg from : ${sender().path.name}")
  }
}
And the application.conf:
manager {
  akka {
    actor {
      provider = "akka.cluster.ClusterActorRefProvider"
    }
    cluster {
      auto-down-unreachable-after = 20s
      seed-nodes = [
        "akka.tcp://mySys@127.0.0.1:2552"
      ]
      roles.1 = "manager"
    }
    remote.netty.tcp.port = 2552
  }
}

worker {
  akka {
    cluster {
      auto-down-unreachable-after = 20s
      seed-nodes = [
        "akka.tcp://mySys@127.0.0.1:2552"
      ]
      roles.1 = "worker"
    }
    remote.netty.tcp.port = 2554
    actor {
      provider = "akka.cluster.ClusterActorRefProvider"
    }
  }
}
The worker is initialized (and I can see the state change [Start -> Oldest] message in the logs), but the message sent from the manager never arrives at the worker. It used to work fine when I deployed the singleton on the remote node itself, but now I want the manager to deploy it.
I also tried to deploy it as a child of the manager (using context instead of context.system) and changed the singleton path to user/manager/singleton/worker, but it didn't work.
I'm using Akka 2.3.11
Edit:
sbt file:

name := "MyProject"

version := "1.0"

scalaVersion := "2.10.5"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.3.11",
  "com.typesafe.akka" %% "akka-cluster" % "2.3.11",
  "joda-time" % "joda-time" % "2.0",
  "com.typesafe.akka" %% "akka-contrib" % "2.3.11"
)
So I played around a bit with different options for creating ClusterSingletonManagers, and I think deploying them remotely breaks something within the singleton pattern. I have gathered a few indicators for this:
Since it is a remote deployment, the path of the ClusterSingletonManager on the worker node is /remote/akka.tcp/mySys@127.0.0.1:2552/user/worker. I don't think the library can or will handle this, since it expects /user/worker.
When trying to send the message from the master node using ClusterSingletonProxy, the log in DEBUG mode states No singleton available, stashing message hello worker and Trying to identify singleton at akka.tcp://mySys@127.0.0.1:2552/user/worker/singleton (which fails and retries). It is looking for the singleton on the wrong node: since no manager is available there, it is apparently not aware that the singleton is on the worker node.
When creating the ClusterSingletonManager on the worker node directly, everything works as expected.
You also had an issue with your naming. Your singletonName is worker and your manager itself (the ClusterSingletonManager actor) does not have any name. When you create the proxy, you use the path /user/singleton/worker, but the path should be /user/{actorName}/{singletonName}. So in my code I used worker as the actorName and singleton as the singletonName.
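With those names, the relevant paths on the node running the singleton look like this (a quick illustration, not program output):

/user/worker            (the ClusterSingletonManager, i.e. the actorName)
/user/worker/singleton  (the actual singleton instance that the proxy resolves)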
So here's my working code:
object Manager extends App {
  val sys = ActorSystem("mySys", ConfigFactory.load("application").getConfig("manager"))
  sys.actorOf(Props[Manager], "manager")
}

class Manager extends Actor with ActorLogging {
  override def receive: Receive = {
    case MemberUp(member) if member.address != Cluster(context.system).selfAddress =>
      context.actorOf(ClusterSingletonProxy.props(
        singletonPath = s"/user/worker/singleton",
        role = Some("worker")), name = "workerProxy") ! "hello worker"
  }

  override def preStart(): Unit = {
    Cluster(context.system).subscribe(self, classOf[MemberUp])
  }
}

object Worker extends App {
  val sys = ActorSystem("mySys", ConfigFactory.load("application").getConfig("worker"))
  sys.actorOf(ClusterSingletonManager.props(
    singletonProps = Props(classOf[Worker]),
    singletonName = "singleton",
    terminationMessage = PoisonPill,
    role = Some("worker")), name = "worker")
}

class Worker extends Actor with ActorLogging {
  override def receive: Receive = {
    case msg =>
      println(s"GOT MSG : $msg from : ${sender().path.name}")
  }
}
application.conf and build.sbt stayed the same.
EDIT
Got it to work by referencing the ClusterSingletonProxy with the actual path on the worker node (taking into account that it is a remote path). I am not sure I would recommend this, since I am still not sure the library is designed to do it, but it works at least in this minimal example:
object Manager extends App {
  val sys = ActorSystem("mySys", ConfigFactory.load("application").getConfig("manager"))
  sys.actorOf(Props[Manager], "manager")
}

class Manager extends Actor with ActorLogging {
  override def receive: Receive = {
    case MemberUp(member) if member.address != Cluster(context.system).selfAddress =>
      val ref = context.system.actorOf(ClusterSingletonManager.props(
        singletonProps = Props(classOf[Worker]),
        singletonName = "singleton",
        terminationMessage = PoisonPill,
        role = Some("worker")).withDeploy(Deploy(scope = RemoteScope(member.address))), name = "worker")
      context.actorOf(ClusterSingletonProxy.props(
        singletonPath = s"${ref.path.toStringWithoutAddress}/singleton", // /remote/akka.tcp/mySys@127.0.0.1:2552/user/worker/singleton
        role = Some("worker")), name = "workerProxy") ! "hello worker"
  }

  override def preStart(): Unit = {
    Cluster(context.system).subscribe(self, classOf[MemberUp])
  }
}

object Worker extends App {
  val sys = ActorSystem("mySys", ConfigFactory.load("application").getConfig("worker"))
}

class Worker extends Actor with ActorLogging {
  override def receive: Receive = {
    case msg =>
      println(s"GOT MSG : $msg from : ${sender().path.name}")
  }
}

Akka actor does not receive message with DistributedPubSub

I'm trying to make an Akka cluster with distributed pub-sub messaging work, but I'm stuck. My actor is properly started and subscribed to the topic, but no messages are received. Here is the code:
import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.client.ClusterClient.Publish
import akka.cluster.pubsub.DistributedPubSub
import akka.cluster.pubsub.DistributedPubSubMediator.{Subscribe, SubscribeAck}

case object DistributedMessage

object ClusterExample extends App {
  val system = ActorSystem("ClusterSystem")
  val actor = system.actorOf(Props(classOf[ClusterExample]), "clusterExample")
}

class ClusterExample extends Actor {
  private val mediator = DistributedPubSub(context.system).mediator
  mediator ! Subscribe("content", self)

  override def receive = {
    case SubscribeAck(Subscribe("content", None, `self`)) =>
      (1 to 100) foreach (_ => {
        mediator ! Publish("content", msg = DistributedMessage)
      })
    case DistributedMessage => println("received message from queue!")
  }
}
And here is configuration:
akka {
  log-dead-letters = 0
  log-dead-letters-during-shutdown = on

  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
    enable-additional-serialization-bindings = on
  }

  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
      bind-hostname = "0.0.0.0"
      bind-port = 2552
    }
  }

  extensions = ["akka.cluster.pubsub.DistributedPubSub"]

  cluster {
    seed-nodes = [
      "akka.tcp://ClusterSystem@127.0.0.1:2552"
    ]
  }
}
"received message from queue" is actually never printed
Silly mistake: the problem was an invalid import. This:

import akka.cluster.client.ClusterClient.Publish

should be replaced by:

import akka.cluster.pubsub.DistributedPubSubMediator.Publish

ClusterClient.Publish is a different message type, meant to be sent to a ClusterClient actor; the DistributedPubSub mediator does not recognize it, so the publishes went nowhere.
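
For reference, a minimal publish-side sketch with the corrected import (the Publisher object name is illustrative; it assumes the same ClusterSystem and "content" topic as above):

import akka.actor.ActorSystem
import akka.cluster.pubsub.DistributedPubSub
import akka.cluster.pubsub.DistributedPubSubMediator.Publish

object Publisher extends App {
  val system = ActorSystem("ClusterSystem")
  val mediator = DistributedPubSub(system).mediator
  // Publish is delivered to every actor currently subscribed to the topic.
  mediator ! Publish("content", "hello subscribers")
}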

Akka-Cluster-Sharding: local ShardRegion(system).shardRegion(_)

I have a master actor responsible for initializing some worker actors (there are two types of worker actors, namely ParamServer actors and DataShard actors). For example, say I initiated 20 DataShard actors via ClusterSharding(system).start(_,_,_,_,_) and after that I want to send some message to all DataShard actors (say, a case object ReadyToProcess). I read that I can send messages to entities in Akka Cluster Sharding via the local ShardRegion(system).shardRegion(_). Will the local shardRegion(_) send to all DataShards or just one? How can I send messages to all DataShard actors? (One possible approach is sketched after the code below.)
The master class is given by:
class Master(ports: Seq[String],
             dataSet: Seq[Example],
             dataPerReplica: Int,
             layerDimensions: Seq[Int],
             activation: ActivationFunction,
             activationFunctionDer: ActivationFunction,
             learningRate: Double) extends Actor with ActorLogging {

  val dataShards = dataSet.grouped(dataPerReplica).toSeq
  val numLayers = layerDimensions.size
  var numShardsFinished = 0

  ports foreach { port =>
    val config = ConfigFactory.parseString("akka.remote.netty.tcp.port=" + port).withFallback(ConfigFactory.load())
    val clusterSystem = ActorSystem("ClusterSystem", config)

    val paramServerRegions: Array[ActorRef] = new Array[ActorRef](numLayers - 1)
    for (i <- 0 to numLayers - 2) {
      paramServerRegions(i) = ClusterSharding(clusterSystem).start(
        typeName = ParamServer.shardName,
        entityProps = ParamServer.props(i, dataShards.size, learningRate, NeuralNetworkOps.randomMatrix(layerDimensions(i + 1), layerDimensions(i) + 1)),
        settings = ClusterShardingSettings(clusterSystem),
        extractEntityId = ParamServer.extractEntityId,
        extractShardId = ParamServer.extractShardId
      )
    }

    // Create actors for each data shard/replica. Each replica needs to know about all
    // parameter shards because they will be reading from them and updating them.
    val dataShardRegions: Array[ActorRef] = new Array[ActorRef](dataShards.size)
    for (i <- 0 until dataShards.size) { // `until`, not `to`: `to` would run one index past the array
      dataShardRegions(i) = ClusterSharding(clusterSystem).start(
        typeName = DataShard.shardName,
        entityProps = DataShard.props(i, clusterSystem, dataShards(i), activation, activationFunctionDer, paramServerRegions),
        settings = ClusterShardingSettings(clusterSystem),
        extractEntityId = ParamServer.extractEntityId,
        extractShardId = ParamServer.extractShardId
      )
    }
  }

  def receive: Receive = {
    case Start =>
      val shardRegionSender = ClusterSharding(context.system).shardRegion(DataShard.shardName)
      println("Tomosha boshlandi") // "The show has begun" (Uzbek)
      shardRegionSender ! ReadyToProcess

    case ShardDone(id) =>
      numShardsFinished += 1
      log.info("")
      if (numShardsFinished == dataShards.size) {
        context.parent ! JobDone
        context.stop(self)
      }
  }
}
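
As for the question itself: a shard region routes each incoming message to exactly one entity, chosen by extractEntityId, so a single ReadyToProcess sent to shardRegion(_) reaches only one DataShard. One way to reach all of them is to send one message per entity id. A minimal sketch, assuming entity ids are the shard indices as in the code above (the ShardEnvelope class is hypothetical, not part of the original code):

// Hypothetical envelope; extractEntityId would have to unwrap it,
// e.g. { case ShardEnvelope(id, payload) => (id.toString, payload) }.
case class ShardEnvelope(entityId: Int, payload: Any)

val region = ClusterSharding(clusterSystem).shardRegion(DataShard.shardName)
// One envelope per entity id, so every DataShard entity gets ReadyToProcess.
for (i <- dataShards.indices) {
  region ! ShardEnvelope(i, ReadyToProcess)
}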

Akka actor infinite loop

I'm trying to write a simple matrix multiplication program with concurrent processing using Scala and Akka actors. I've not even written 10% of the code and I'm already running into trouble. I created two actors, master and worker, and I'm trying to communicate between them, but it runs into an infinite loop. Any suggestions are really appreciated. As you can see, the code below does nothing yet: it prints two 10x10 matrices in the master, after which the worker is called. But the worker's workDone message never comes back to the master. I also suspect this has something to do with a warning I'm getting:
patterns after a variable pattern cannot match (inside the receive of master, for case masterSend)
import akka.actor.{ActorRef, Actor, ActorSystem, Props}
import scala.Array._
import scala.util.Random

case object masterSend
case object workSend
case object workDone

object MatrixMultiply {
  val usage = """
    Usage: MainStart <matrix-dimension> <high-value>
  """

  def main(args: Array[String]) {
    if (args.length != 2) {
      println(usage)
      System.exit(1)
    }
    val Dim = args(0).toInt
    val Max = args(1).toInt
    val system = ActorSystem("ComputeSystem")
    val worker = system.actorOf(Props[Worker], name = "worker")
    val master = system.actorOf(Props(new Master(Dim, Max, worker)), name = "master")
    master ! masterSend
  }

  class Master(Dim: Int, Max: Int, worker: ActorRef) extends Actor {
    def receive = {
      case masterSend =>
        val r = new Random(34636)
        val matrixA = ofDim[Int](Dim, Dim)
        val matrixB = ofDim[Int](Dim, Dim)
        println("Matrix A: ")
        for (i <- 0 to Dim - 1) {
          for (j <- 0 to Dim - 1) {
            matrixA(i)(j) = r.nextInt(Max)
            print(matrixA(i)(j) + " ")
          }
          println()
        }
        r.setSeed(23535)
        println("Matrix B: ")
        for (i <- 0 to Dim - 1) {
          for (j <- 0 to Dim - 1) {
            matrixB(i)(j) = r.nextInt(Max)
            print(matrixB(i)(j) + " ")
          }
          println()
        }
        worker ! workSend
      case workDone =>
        println("Work was done!!")
        context.system.shutdown()
    }
  }

  class Worker extends Actor {
    def receive = {
      case workSend =>
        println("Work Done")
        sender ! workDone
    }
  }
}
The problem is with pattern matching on the objects you've created: it matches improperly. Do not bother with objects here; use strings, for example:
object A {
  val masterSend = "masterSend"
  val workSend = "workSend"
  val workDone = "workDone"
}

object MatrixMultiply {
  val usage = """
    Usage: MainStart <matrix-dimension> <high-value>
  """

  def main(args: Array[String]) {
    val Dim = 3
    val Max = 2
    val system = ActorSystem("ComputeSystem")
    val worker = system.actorOf(Props[Worker], name = "worker")
    val master = system.actorOf(Props(new Master(Dim, Max, worker)), name = "master")
    master ! A.masterSend
  }

  class Master(Dim: Int, Max: Int, worker: ActorRef) extends Actor {
    def receive = {
      case A.masterSend =>
        println("Master sent")
        worker ! A.workSend
      case A.workDone =>
        println("Work was done!!")
        context.system.shutdown()
    }
  }

  class Worker extends Actor {
    def receive = {
      case A.workSend =>
        println("Work Done")
        sender ! A.workDone
    }
  }
}
You've named your objects with lower-case letters:

object messageSend

But in a pattern, a lower-case identifier is not treated as a reference to that object; it is a variable pattern that binds a new variable instead:

case messageSend => // here messageSend is a fresh variable that matches anything

You could write anything there; case magicBall => would also compile.
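
As an alternative to strings (not from the original answer, but standard Scala practice), you can keep the case objects and give them upper-case names, so that the patterns refer to the objects themselves instead of binding fresh variables:

case object MasterSend
case object WorkSend
case object WorkDone

class Worker extends Actor {
  def receive = {
    case WorkSend => // upper-case identifier: matches only the WorkSend object
      println("Work Done")
      sender ! WorkDone
  }
}

Backquoting the lower-case names (case `masterSend` =>) also works, but renaming is the more common fix.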