Log after actor system has been shut down - Scala

I am using the log method from ActorLogging to write my logs. I would like to log a few messages after the system has been shut down, but that does not work, presumably because the logger relies on the actor system. What I would like to do looks like this:
logger.info("Shutting down actor system.")
context.system.shutdown()
context.system.registerOnTermination {
  logger.info("Actor System terminated, stopping loggers and exiting.")
  loggerContext.stop()
}
Are there any workarounds to this problem?
Thanks!

You can just use SLF4J directly (backed, for instance, by Logback), as described here.
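For example, a minimal sketch (the object and method names here are only illustrative) that registers a plain SLF4J logger on termination, so the final messages do not depend on the actor system's own logging:
import akka.actor.ActorSystem
import org.slf4j.LoggerFactory
object ShutdownLogging {
  // A plain SLF4J logger is not routed through the ActorSystem's event
  // stream, so it keeps working after the system has terminated.
  private val log = LoggerFactory.getLogger(getClass)
  def install(system: ActorSystem): Unit = {
    system.registerOnTermination {
      // Runs once all actors in the system have been stopped.
      log.info("Actor system terminated, stopping loggers and exiting.")
    }
  }
}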

Related

Logging in Eco.MVC.EcoController

In my MDriven MVC application I'm logging Trace messages into a log file. It seems that the class Eco.MVC.EcoController uses Trace to log the following events:
EcoController.EnsureEcoSpace: HomeController
EcoController.EnsureEcoSpace: CreateEcoSpace
EcoController.ReleaseEcoSpace: Disposing EcoSpace
OnResultExecuted (EcoController out of scope).
Is it possible to switch this logging off?
Oops - no, they were not optional. Checking in a fix for this now.
They will be off by default and can be turned on with:
EcoTraceCategories.WebDebugPrint = true;

Monitoring mailbox-size metric of routee actors with kamon-akka 0.6.x

I was using version 0.5.2 of the kamon-akka library, without any problems, to monitor my Akka actors. I then upgraded it to 0.6.3 and noticed that some statistics were no longer sent.
When I looked at the Kamon source code, I saw that the mailbox-size metric is not sent for routee actors running under a router. Instead, router metrics such as routingTime are sent for them. However, I'm using the routees as workers and need to monitor their mailbox sizes.
Here is the part of the Kamon source that creates a routee monitor with RouterMetrics instead of ActorMetrics (which contains the mailbox-size metric):
package akka.kamon.instrumentation
object ActorMonitor {
  ...
  def createRouteeMonitor(cellInfo: CellInfo): ActorMonitor = {
    def routerMetrics = Kamon.metrics.entity(RouterMetrics, cellInfo.entity)
    if (cellInfo.isTracked)
      new TrackedRoutee(cellInfo.entity, routerMetrics)
    else ActorMonitors.ContextPropagationOnly
  }
  ...
}
I'm not sure if this is a bug, but how can I solve this problem? Are there any configuration options or workarounds to fix it?
Thank you in advance.
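One possible workaround (just a sketch, not verified against kamon-akka 0.6.x; the names below are hypothetical) is to skip the Akka router and distribute work to plain child actors yourself, so each worker is an ordinary actor from Kamon's point of view and should be instrumented with ActorMetrics rather than RouterMetrics:
import akka.actor.{Actor, ActorRef, Props}
// Hypothetical round-robin dispatcher that avoids Akka routers.
class ManualRoundRobin(workerProps: Props, poolSize: Int) extends Actor {
  private val workers: Vector[ActorRef] =
    Vector.tabulate(poolSize)(i => context.actorOf(workerProps, s"worker-$i"))
  private var next = 0
  def receive = {
    case job =>
      workers(next) forward job
      next = (next + 1) % poolSize
  }
}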

Akka IO(Tcp) get reason of CommandFailed

I have the following example of an actor using IO(Tcp):
https://gist.github.com/flirtomania/a36c50bd5989efb69a5f
For the sake of the experiment I ran it twice, so the second instance was also trying to bind to port 803. Obviously I got an error.
Question: how can I get the reason for the CommandFailed message? In application.conf I have enabled SLF4J and debug-level logging, and I then see this error in my logs:
DEBUG akka.io.TcpListener - Bind failed for TCP channel on endpoint [localhost/127.0.0.1:803]: java.net.BindException: Address already in use: bind
But why is that logged only at debug level? I do not want to enable debug logging for the whole ActorSystem; I want to get the reason for the CommandFailed event (e.g. a java.lang.Exception instance on which I could call e.printStackTrace()).
Something like:
case c @ CommandFailed(_) => val e: Exception = c.getReason()
Maybe this is not the Akka way? How can I get diagnostic info then?
Here's what you can do: find the PID that is still alive and then kill it.
On a Mac -
lsof -i :portNumber
then
kill -9 PidNumber
I understood that you have two questions.
1. If you run the same code twice simultaneously, both actors try to bind to the same port (in your case, 803), which is not possible unless the already-bound one unbinds and closes the connection so that the other one can bind.
2. You can import akka.event.Logging and put val log = Logging(context.system, this) at the beginning of your actors, which will log all of their activities; it also shows the name of the actor, the corresponding actor system, and host+port (if you are using akka-cluster).
Hope that helps.
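For what it's worth, the CommandFailed message does carry the original command, so you can at least tell which Bind failed and react to it, even though the underlying BindException is only visible in the debug logs. A minimal sketch (the actor name is hypothetical):
import akka.actor.{Actor, ActorLogging}
import akka.io.Tcp
class Listener extends Actor with ActorLogging {
  def receive = {
    case Tcp.CommandFailed(b: Tcp.Bind) =>
      // We know which command failed, but not the underlying exception.
      log.error("Could not bind to {}; enable akka.io debug logging for the cause", b.localAddress)
      context stop self
    case Tcp.Bound(localAddress) =>
      log.info("Bound to {}", localAddress)
  }
}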

Handling connection failures in apache-camel

I am writing an apache-camel RabbitMQ consumer. I would like to react somehow to connection problems (i.e. try to reconnect). Is it possible to configure apache-camel to automatically reconnect?
If not, how can I find out that a connection to the queue was interrupted? I've done the following test:
start the queue (and some producer)
start my consumer (it was getting messages as expected)
stop the queue (the messages stopped arriving, as expected, but no exception was thrown)
start the queue (no new messages were received)
I am using Camel from Scala (via akka-camel), but a Java solution would probably also be OK.
You can pass the flag automaticRecoveryEnabled=true in the URI, and Camel will reconnect if the connection is lost.
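For illustration, a sketch of what that might look like from akka-camel (host, exchange, and queue names are placeholders; check the camel-rabbitmq docs for the options supported by your version):
import akka.camel.{CamelMessage, Consumer}
class OrdersConsumer extends Consumer {
  // automaticRecoveryEnabled=true asks the underlying RabbitMQ client to reconnect
  def endpointUri = "rabbitmq://localhost:5672/orders.exchange?queue=orders&automaticRecoveryEnabled=true"
  def receive = {
    case msg: CamelMessage => println(msg.body) // handle the message
  }
}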
For automatic RabbitMQ resource recovery (Connections/Channels/Consumers/Queues/Exchanges/Bindings) when failures occur, check out Lyra (which I authored). Example usage:
// Retry recovery up to 20 times, once per second, for at most 5 minutes.
Config config = new Config()
    .withRecoveryPolicy(new RecoveryPolicy()
        .withMaxAttempts(20)
        .withInterval(Duration.seconds(1))
        .withMaxDuration(Duration.minutes(5)));
ConnectionOptions options = new ConnectionOptions().withHost("localhost");
Connection connection = Connections.create(options, config);
The rest of the API is just the amqp-client API, except your resources are automatically recovered when failures occur.
I'm not sure about camel-rabbitmq specifically, but hopefully there's a way you can swap in your own resource creation via Lyra.
The current camel-rabbitmq just creates a connection and channel when the consumer or producer is started, so it doesn't have a chance to catch the connection exception :(.

Why does my actorFor fail when deployed to an Akka Microkernel JAR?

I have a somewhat simple project deployed as a JAR. I am starting up a supervisor actor that confirms it is booting up by emitting the following log message:
[akka://service-kernel/user/Tracker] Starting new Tracker
However, when I reference the actor via actorFor locally with sbt run, it is found without a problem. In production, the same .actorFor("akka://service-kernel/user/Tracker") throws a NullPointerException. I can confirm via the logs that in production the Tracker has sent out its confirmation that it booted up.
Are there any issues when using a Microkernel deployed to a JAR to make actor references?
Edit
I suspect that both the way I reference the system and the way Akka treats the startup class are related to the issue. Since I have specified a startup class called ServiceKernel, I am performing the lookup as ServiceKernel.system.actorFor. I will provide an answer if this is confirmed.
Confirmed that it was related to how the startup class handles the Microkernel.
The ServiceKernel mentioned above is used in the start script to boot up the Microkernel JAR: ./start com.package.ServiceKernel. In an sbt shell this isn't needed, so the alternative class I provided works well for referencing an actor system.
However, in a Microkernel the ServiceKernel appears to use a different actor system altogether, so if you reference that system (as I did), actorFor lookups will always fail. I solved the problem by passing the system down through the boot classes into the specific class where I was making the actorFor call, and it worked. It looks like this (pseudo-code):
class ServiceKernel extends Bootable {
  val system = ActorSystem("service-kernel")
  def startup = {
    system.actorOf(Props(new Boot(isDev, system))) ! Start
  }
}
And then passing it to an HttpApi class:
class Boot(val isDev: Boolean, system: ActorSystem) extends Actor with SprayCanHttpServerApp {
  def receive = {
    case Start =>
      // set up the HTTP server
      val service = system.actorOf(Props(new HttpApi(system)), "tracker-http-api")
  }
}