AKKA: Confusion about programmatic remote deployment - scala

I am using Akka remote deployment. I used logging to check whether the actor was successfully deployed remotely. Here is my log output:
[adaptiveCEP-akka.actor.default-dispatcher-18] [akka.tcp://adaptiveCEP@127.0.0.1:2555/remote/akka.tcp/adaptiveCEP@127.0.0.1:2554/user/disjunction/simple-2555-0.4631423946172286] hi, I am simple-2555-0.4631423946172286
[adaptiveCEP-akka.actor.default-dispatcher-18] [akka.tcp://adaptiveCEP@127.0.0.1:2555/remote/akka.tcp/adaptiveCEP@127.0.0.1:2554/user/disjunction/simple-2555-0.4631423946172286] hi, I am Actor[akka://adaptiveCEP/remote/akka.tcp/adaptiveCEP@127.0.0.1:2554/user/disjunction/simple-2555-0.4631423946172286#1386676347]
It appears as if actor simple-2555-0.4631423946172286#1386676347 is a child of the disjunction actor and both are hosted on the same machine (i.e., no remote deployment of the child), while the actor doing the supervision is akka.tcp://adaptiveCEP@127.0.0.1:2555.
According to Top-Level Scopes for Actor Paths:
"/remote" is an artificial path below which all actors reside whose supervisors are remote actor references
Have I misunderstood something?
If required, here is the relevant code:
val randomRouter = actorSystem.actorOf(Props[Master], "disjunction")
Master.scala
val temp = context.actorOf(Props[SimpleClusterListener].withDeploy(Deploy(scope = RemoteScope(address))), "simple-" + port + "-" + Math.random())
temp ! "hi"
Reference
Create an Akka actor remotely without a new ActorSystem

No, your actor is not deployed locally; it is on the remote machine:
akka.tcp://adaptiveCEP@127.0.0.1:2555/remote/akka.tcp/adaptiveCEP@127.0.0.1:2554/user/disjunction/simple-2555-0.4631423946172286] hi, I am simple-2555-0.4631423946172286
This log entry shows that your actor is running in actor system "adaptiveCEP" on machine 127.0.0.1:2555 and is being supervised by the "disjunction" actor in "adaptiveCEP@127.0.0.1:2554".
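As an aside, the same remote deployment can also be expressed in configuration instead of programmatically with RemoteScope. A sketch, assuming the parent system runs on 127.0.0.1:2554 and the children should land on 127.0.0.1:2555 as in your logs (the wildcard path is an assumption based on your actor names):

```hocon
akka.actor.deployment {
  # deploy every child created under /user/disjunction to the remote node
  "/disjunction/*" {
    remote = "akka.tcp://adaptiveCEP@127.0.0.1:2555"
  }
}
```

With this in place, a plain context.actorOf(Props[SimpleClusterListener], name) in the parent would be deployed remotely without the explicit withDeploy call.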

Related

PoisonPill not killing the Akka actor system as intended

I'm trying to get a child actor in my Scala 2.11 app (that uses Akka) to send a PoisonPill to the top-level actor (Master) and shut down the entire JVM process.
I uploaded this repo so you can reproduce what I'm seeing. Just clone it and run ./gradlew run.
But essentially we have the Driver:
object Driver extends App {
  println("Starting upp the app...")
  lazy val cortex = ActorSystem("cortex")
  cortex.registerOnTermination {
    System.exit(0)
  }
  val master = cortex.actorOf(Props[Master], name = "Master")
  println("About to fire a StartUp message at Master...")
  master ! StartUp
  println("Fired! Actor system is spinning up...")
}
and then in the Child:
class Child extends Actor {
  override def receive = {
    case MakeItHappen =>
      println("Child will make it happen!")
      context.parent ! PoisonPill
  }
}
When I run ./gradlew run I see the following output:
./gradlew run
Starting a Gradle Daemon (subsequent builds will be faster)
:compileJava NO-SOURCE
:compileScala UP-TO-DATE
:processResources NO-SOURCE
:classes UP-TO-DATE
:run
Starting upp the app...
About to fire a StartUp message at Master...
Fired! Actor system is spinning up...
Stage Director has received a command to start up the actor system!
Child will make it happen!
<==========---> 80% EXECUTING
> :run
However the JVM process never shuts down. I would have expected the PoisonPill to kill the Master (as well as Child), exit/shutdown the actor system, and then the System.exit(0) command (which I registered upon actor system termination) to kick in and exit the JVM process.
Can anyone tell what's going on here?
You're stopping the Master, but you're not shutting down the ActorSystem. Even though Master is a top-level actor (i.e., an actor created by system.actorOf), stopping it doesn't terminate the ActorSystem (note that Master itself has a parent, so it's not the "highest" actor in the actor hierarchy).
The normal way to terminate an ActorSystem is to call terminate. If you want the entire system to shut down when the Master stops, you could override the Master's postStop hook and call context.system.terminate() there. Calling system.terminate() in an actor's lifecycle hook is risky, however: if an exception is thrown in the actor and the actor's supervision strategy stops it, you probably don't intend to shut down the entire system at that point. Such a design is not very resilient.
Check out this pattern for shutting down an actor system "at the right time."
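A minimal sketch of the postStop approach described above, assuming a Master and message objects roughly like the ones in the question (the only substantive addition is the postStop override; everything else is illustrative):

```scala
import akka.actor.{Actor, ActorSystem, PoisonPill, Props}

case object StartUp
case object MakeItHappen

class Child extends Actor {
  override def receive = {
    case MakeItHappen =>
      context.parent ! PoisonPill // stops Master, which triggers Master.postStop
  }
}

class Master extends Actor {
  val child = context.actorOf(Props[Child], "child")

  def receive = {
    case StartUp => child ! MakeItHappen
  }

  // When this actor stops, take the whole ActorSystem down with it,
  // which in turn fires the registerOnTermination callback in Driver.
  override def postStop(): Unit =
    context.system.terminate()
}
```

Note the caveat above still applies: any supervision decision that stops Master will now also terminate the system.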

Monitoring mailbox-size metric of routee actors with kamon-akka 0.6.x

I was using version 0.5.2 of kamon-akka library, without any problems, to monitor my akka actors. I then upgraded it to 0.6.3 and noticed that some statistics were not sent.
When I looked at the source code of kamon, I saw that the mailbox-size metric was not sent for routee actors running under a router. Instead, router metrics such as routingTime are being sent for the routee actors. However, I'm using them as workers and need to monitor their mailbox sizes.
Here is the part of kamon source which creates a routee monitor with RouterMetrics instead of ActorMetrics which contains the mailbox-size metric:
package akka.kamon.instrumentation

object ActorMonitor {
  ...
  def createRouteeMonitor(cellInfo: CellInfo): ActorMonitor = {
    def routerMetrics = Kamon.metrics.entity(RouterMetrics, cellInfo.entity)

    if (cellInfo.isTracked)
      new TrackedRoutee(cellInfo.entity, routerMetrics)
    else ActorMonitors.ContextPropagationOnly
  }
  ...
}
I'm not sure whether this is a bug, but how can I solve this problem? Is there any configuration or workaround to fix it?
Thanks in advance.

Does Akka clustering (2.4) use any ports by default other than 2551?

Does Akka use ports (by default) other than port 2551 for clustering?
I have a 3-node Akka cluster, each node running in a Docker container using 2.4's bind-hostname/bind-port settings. I have a seed running outside a Docker container in some test code. I can successfully send messages to the nodes point-to-point, directly, so basic Akka messaging works fine for the Docker-ized nodes.
My Seed code looks like this:
class Seed() extends Actor {
  def receive = {
    case "report" =>
      mediator ! DistributedPubSubMediator.SendToAll("/user/sender", ReportCommand(), false)
    case r: ReportCommand => println("Report, please!")
  }
}

val seed = system.actorOf(Props(new Seed()), "sender")
val mediator = DistributedPubSub(system).mediator
mediator ! DistributedPubSubMediator.Put(seed)
My worker nodes look like this:
class SenderActor(senderLike: SenderLike) extends Actor {
  val mediator = DistributedPubSub(context.system).mediator
  mediator ! Put(self)

  def receive = {
    case report: ReportCommand => println("REPORT CMD!")
  }
}
When I run this and send a "report" message to the Seed, I see the Seed's "Report, please!" message, so it received its own broadcast, but the 3 workers in the Dockers don't register having received anything (no output on receive). Not sure what's wrong so I'm wondering if there is another port besides 2551 I need to EXPOSE in my Dockers for clustering?
You'll need to configure Akka with both port and bind-port, since in Docker the "local port" is different from the port the outside world can reach you at.
To do this, see this documentation page: Peer to Peer vs Client Server
And this FAQ section Why are replies not received from a remote actor?
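A sketch of the remoting settings involved, for Akka 2.4's netty transport (hostname values here are placeholders; adjust them to your Docker network):

```hocon
akka.remote.netty.tcp {
  hostname = "host.example.com"  # address other cluster nodes use to reach this node
  port     = 2551                # port advertised to the cluster

  bind-hostname = "0.0.0.0"      # address actually bound inside the container
  bind-port     = 2551           # port actually bound inside the container
}
```

The advertised hostname/port pair must be reachable from the other nodes (and EXPOSEd/published by Docker), while the bind pair is what the JVM listens on inside the container.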

Akka IO(Tcp) get reason of CommandFailed

I have the following example of Actor using IO(Tcp)
https://gist.github.com/flirtomania/a36c50bd5989efb69a5f
For the sake of the experiment I ran it twice, so the second instance was trying to bind to port 803. Obviously I got an error.
Question: How can I get the reason for the "CommandFailed"? In application.conf I have enabled slf4j and debug-level logging, and then I found this error in my logs:
DEBUG akka.io.TcpListener - Bind failed for TCP channel on endpoint [localhost/127.0.0.1:803]: java.net.BindException: Address already in use: bind
But why is that only at debug level? I do not want to enable debug logging for the whole ActorSystem; I want to get the reason for the CommandFailed event (e.g., a java.lang.Exception instance on which I could call e.printStackTrace()).
Something like:
case c @ CommandFailed => val e: Exception = c.getReason()
Maybe it's not the Akka-way? How to get diagnostic info then?
Here's what you can do - find the PID that still keeps living and then kill it.
On a Mac -
lsof -i :portNumber
then
kill -9 PidNumber
I understood that you have two questions.
1. If you run the same code twice, both actors try to bind to the same port (in your case, 803), which is not possible unless the first one unbinds and closes the connection so that the other can bind.
2. You can import akka.event.Logging and put val log = Logging(context.system, this) at the beginning of your actors, which will log all of their activity. It also shows the name of the actor, the corresponding actor system, and host+port (if you are using akka-cluster).
Hope that helps.
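For reference, the underlying failure Akka's TcpListener logs is a plain JVM-level java.net.BindException. A self-contained sketch (no Akka involved) showing how a second bind to an occupied port surfaces that exception:

```scala
import java.net.{BindException, ServerSocket}

object BindDemo {
  // Bind twice to the same port; return the failure from the second bind, if any.
  def doubleBind(): Option[BindException] = {
    val first = new ServerSocket(0) // port 0 = pick a free ephemeral port
    try {
      val second = new ServerSocket(first.getLocalPort) // same port: should fail
      second.close()
      None // unexpectedly succeeded
    } catch {
      case e: BindException => Some(e) // "Address already in use"
    } finally first.close()
  }

  def main(args: Array[String]): Unit =
    println(doubleBind().map(_.getMessage).getOrElse("second bind unexpectedly succeeded"))
}
```

This is the same exception text that appears in the debug log line above; Akka catches it in its listener and reports the failure to your actor as CommandFailed.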

Why does my actorFor fail when deployed to a Akka Microkernel JAR?

I have a somewhat simple project deployed to a JAR. I am starting up a supervisor actor that confirms it is booting up by sending out the following log message:
[akka://service-kernel/user/Tracker] Starting new Tracker
However, when I reference the actor via actorFor locally with an sbt run, it is found without a problem. In production, I use the same .actorFor("akka://service-kernel/user/Tracker") and it throws a NullPointerException. I can confirm via the logs that in production the Tracker has sent out its confirmation that it booted up.
Are there any issues when using a Microkernel deployed to a JAR to make actor references?
Edit
I am suspecting that both the way I reference the system and the way Akka treats the start up class are related to the issue. Since I have specified a start up class called ServiceKernel, I am performing the reference as such: ServiceKernel.system.actorFor. Will provide an answer if confirmed.
Confirmed that it was related to the startup class handling the Microkernel.
The ServiceKernel mentioned above is used in the start script to boot up the Microkernel JAR: ./start com.package.ServiceKernel. In an sbt shell, this isn't needed so the alternative class I provided works well for referencing an Actor System.
However, in a Microkernel the ServiceKernel appears to be using a different Actor System altogether, so if you reference that system (as I did), actorFor lookups will always fail. I solved the problem by passing the system down through the boot classes into the specific class where I was making the actorFor reference, and it worked. It looks like this (pseudo-code):
class ServiceKernel extends Bootable {
  val system = ActorSystem("service-kernel")

  def startup = {
    system.actorOf(Props(new Boot(isDev, system))) ! Start
  }
}
And then passing it to an HttpApi class:
class Boot(val isDev: Boolean, system: ActorSystem) extends Actor with SprayCanHttpServerApp {
  def receive = {
    case Start =>
      // setup HTTP server
      val service = system.actorOf(Props(new HttpApi(system)), "tracker-http-api")
  }
}