Play Framework 2 -- custom loggers in production? - scala

I have a class with a custom logger. Here's a trivial example:
package models

import play.Logger

object AModel {

  val log = Logger.of("amodel")

  def aMethod() {
    if (!log.isInfoEnabled) log.error("Can't log info...")
    log.info("Logging aMethod in AModel")
  }
}
and then we'll enable this logger in application.conf:
logger.amodel=DEBUG
and in development (Play console, using run) this logger does indeed log. But in production, once we hit the message
[info] play - Application started (Prod)
loggers defined like the above logger fail to log any further and instead we go through the error branch. It seems their log level has been changed to ERROR.
Is there any way to correct this undesirable state of affairs? Is there special configuration for production logs?
edit
Play's handling of logs in production is a source of difficulty for more than a few people... https://github.com/playframework/playframework/issues/1186
For some reason it ships with its own logger.xml, which overrides application.conf.
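A common workaround is to point the production process at your own logback configuration via the logger.file system property, so Play's bundled logger.xml no longer wins. A minimal sketch, assuming the standard start script produced by stage/dist (the file name and path are illustrative):
target/start -Dlogger.file=conf/prod-logger.xml
where conf/prod-logger.xml is an ordinary logback file that contains, among other things, <logger name="amodel" level="DEBUG" />.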

Related

Setting DNS lookup's TimeToLive in Scala Play

I am trying to set the TimeToLive setting for DNS lookups in my Scala/Play application. I am using Play 2.5.9 and Scala 2.11.8 and following the AWS guide. I have tried the following approaches:
in application.conf
// Set DNS lookup time-to-live to one second
networkaddress.cache.ttl=1
networkaddress.cache.negative.ttl=1
in AppModule or EagerSingleton (the code would be similar)
class AppModule() extends AbstractModule {
  Security.setProperty("networkaddress.cache.ttl", "1")
  Security.setProperty("networkaddress.cache.negative.ttl", "1")
  ...
}
passing it as a JVM system property:
sbt -Dsun.net.inetaddr.ttl=1 clean run
I have the following piece of test code in the application:
for (i <- 1 to 25) {
  System.out.println(java.net.InetAddress.getByName("google.com").getHostAddress())
  Thread.sleep(1000)
}
This always prints the same IP address, e.g. 216.58.212.206. To me it looks like none of the approaches specified above have any effect. However, maybe I am testing something else and not actually the value of TTL. Therefore, I have two questions:
what is the correct way to pass a security variable into a Play application?
how to test it?
To change the DNS cache settings via java.security.Security you have to provide a custom application loader.
package modules

import play.api.ApplicationLoader.Context
import play.api.inject.guice.{GuiceApplicationBuilder, GuiceApplicationLoader}

class ApplicationLoader extends GuiceApplicationLoader {

  override protected def builder(context: Context): GuiceApplicationBuilder = {
    java.security.Security.setProperty("networkaddress.cache.ttl", "1")
    super.builder(context)
  }
}
Once you have built this application loader, you can enable it in your application.conf:
play.application.loader = "modules.ApplicationLoader"
After that you can use your code above and check whether the DNS cache behaves the way you set it up. But keep in mind that your system is talking to a DNS server which does its own caching, so you may not see a change there.
If you want to be sure that you get different addresses for google.com, you should query an authoritative name server such as ns1.google.com.
If you want to write a test for this, you could write one that resolves the address, waits for the specified amount of time, and then resolves it again. But with a DNS system outside your control, like google.com, this could be a problem if you hit a DNS server that caches.
If you want to write such a check, you could do it with:
import org.junit.runner.RunWith
import org.scalatest.junit.JUnitRunner
import org.scalatest.{FlatSpec, Matchers}
import play.api.test.WithApplicationLoader

@RunWith(classOf[JUnitRunner])
class DnsTests extends FlatSpec with Matchers {

  "DNS Cache ttl" should "refresh after 1 second" in
    new WithApplicationLoader(new modules.ApplicationLoader) {
      // put your test code here
    }
}
As you can see, you can pass the custom application loader into the context of the application that is started behind your test.

Play logging system

I'm a bit confused about the logging system in Play.
Without importing any logging library, I added this to my code:
Logger.debug("Data is: " + data)
It didn't cause a compilation error, but at the same time it didn't print anything in the terminal window where I started the activator (where I typed activator run).
After looking here https://www.playframework.com/documentation/2.5.x/ScalaLogging, I also tried:
val logger = Logger(this.getClass)
logger.debug("Data is: " + data)
However, again nothing is printed.
Why is this happening?
There are a few log levels you can set in application.conf, according to the documentation.
# Root logger:
logger.root=ERROR
# Logger used by the framework:
logger.play=INFO
# Logger provided to your application:
logger.application=DEBUG
# Logger for a third party library
logger.org.springframework=INFO
Try setting the log level to DEBUG in your application.conf.
Currently there is an issue with the default logger configuration in DEV mode: https://github.com/playframework/playframework/issues/5842
The default level for application loggers is INFO, so debug messages are not shown.
While that issue is not fixed, the workaround is to override logback.xml, following the example in https://www.playframework.com/documentation/2.5.x/SettingsLogger, which defines the log level for application as DEBUG.
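A minimal conf/logback.xml along those lines might look like the sketch below (the appender and pattern are illustrative; if you create loggers with Logger(this.getClass), add an entry for your own package, e.g. controllers, as well):
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%date [%level] %logger - %message%n</pattern>
    </encoder>
  </appender>

  <!-- the application logger at DEBUG, as in the SettingsLogger example -->
  <logger name="application" level="DEBUG" />

  <root level="WARN">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>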
1. Add the following to build.sbt:
libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.9.0"
2. Import the following in your controller:
import com.typesafe.scalalogging.Logger
3. Use:
private val logger = Logger(this.getClass)
logger.warn("your messages in here.")
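Putting the three steps together, a minimal sketch (the controller name and log message are placeholders):
package controllers

import com.typesafe.scalalogging.Logger
import play.api.mvc._

class MyController extends Controller {

  private val logger = Logger(this.getClass)

  def index = Action {
    logger.warn("your message in here")
    Ok("logged")
  }
}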

Running junit tests in maven ignores programmatic log4j2 setup

I have a particular JUnit test which processes a big data file and when the logging level is left on TRACE, it kills Eclipse - something to do with the console handling, which is not relevant to this question.
I often switch between running all my tests using m2e, in which case I don't need any debugging log output, and running individual tests, where I often want to see the TRACE output.
To avoid having to edit my log4j2.xml config every time, I coded the log4j config in this particular test to raise the logging level to INFO, like this (from programmatically-change-log-level-in-log4j2):
@Before
public void beforeTest() {
    // initialLogLevel is a field on the test class, used to restore the level afterwards
    LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
    Configuration config = ctx.getConfiguration();
    LoggerConfig loggerConfig = config.getLoggerConfig(LogManager.ROOT_LOGGER_NAME);
    initialLogLevel = loggerConfig.getLevel();
    loggerConfig.setLevel(Level.INFO);
}
But it has no effect.
If the "ROOT_LOGGER" that I am manipulating here represents the same logger as the <root> in my log4j2.xml, then this is not going to work, is it? I need to override all the other loggers, or shut it down completely, but how?
Could it be influenced by my use of slf4j as the log4j2 wrapper in all of my other classes?
I have tried getting hold of the Appenders and calling appender.stop(), but that doesn't work.
You can put a log4j2 config file in the src/test/resources/ directory. During unit tests, that file will be used.
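For example, a minimal sketch that relies on log4j2 picking up log4j2-test.xml from the test classpath ahead of log4j2.xml (the appender and pattern are illustrative):
<?xml version="1.0" encoding="UTF-8"?>
<!-- src/test/resources/log4j2-test.xml -->
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>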

Why does my actorFor fail when deployed to an Akka Microkernel JAR?

I have a somewhat simple project deployed to a JAR. I am starting up a supervisor actor that confirms it is booting up by sending out the following log message:
[akka://service-kernel/user/Tracker] Starting new Tracker
However, when I go to reference the actor via actorFor locally with an sbt run, it finds it no problem. In production, I use the same .actorFor("akka://service-kernel/user/Tracker") and it throws a NullPointerException. I can confirm via the logs that in production, the Tracker has sent out its confirmation that it booted up.
Are there any issues when using a Microkernel deployed to a JAR to make actor references?
Edit
I am suspecting that both the way I reference the system and the way Akka treats the start up class are related to the issue. Since I have specified a start up class called ServiceKernel, I am performing the reference as such: ServiceKernel.system.actorFor. Will provide an answer if confirmed.
Confirmed that it was related to the startup class handling the Microkernel.
The ServiceKernel mentioned above is used in the start script to boot up the Microkernel JAR: ./start com.package.ServiceKernel. In an sbt shell, this isn't needed so the alternative class I provided works well for referencing an Actor System.
However, in a Microkernel the ServiceKernel appears to be using a different actor system altogether, so if you reference that system (like I did) then actorFor lookups will always fail. I solved the problem by passing the system down through the boot classes into the specific class where I was making the actorFor reference, and it worked. I did it like this (pseudo-code):
class ServiceKernel extends Bootable {

  val system = ActorSystem("service-kernel")

  def startup = {
    system.actorOf(Props(new Boot(isDev, system))) ! Start
  }
}
And then passing it to an HttpApi class:
class Boot(val isDev: Boolean, system: ActorSystem) extends Actor with SprayCanHttpServerApp {

  def receive = {
    case Start =>
      // set up the HTTP server
      val service = system.actorOf(Props(new HttpApi(system)), "tracker-http-api")
  }
}
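And a sketch of the HttpApi side doing the lookup with the system it was handed (the forwarding body is illustrative; only the actor path comes from the logs above):
class HttpApi(system: ActorSystem) extends Actor {

  // look up the Tracker on the system passed down from ServiceKernel,
  // not on a second, implicitly created actor system
  val tracker = system.actorFor("akka://service-kernel/user/Tracker")

  def receive = {
    case msg => tracker forward msg
  }
}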

Code coverage on Play! project

I have a Play! project where I would like to add some code coverage information. So far I have tried JaCoCo and scct. The former has the problem that it is based on bytecode, hence it seems to give warnings about missing tests for methods that are autogenerated by the Scala compiler, such as copy or canEqual. scct seems a better option, but in any case I get many errors during tests with both.
Let me stick with scct. I essentially get errors for every test that tries to connect to the database. Many of my tests load some fixtures into an H2 database in memory and then make some assertions. My Global.scala contains
override def onStart(app: Application) {
  SessionFactory.concreteFactory = Some(() => connection)

  def connection() = {
    Session.create(DB.getConnection()(app), new MySQLInnoDBAdapter)
  }
}
while the tests usually are enclosed in a block like
class MySpec extends Specification {

  def app = FakeApplication(additionalConfiguration = inMemoryDatabase())

  "The models" should {
    "be five" in running(app) {
      Fixtures.load()
      MyModels.all.size should be_==(5)
    }
  }
}
The line running(app) allows me to run a test in the context of a working application connected to an in-memory database, at least usually. But when I run code coverage tasks, such as scct coverage:doc, I get a lot of errors related to connecting to the database.
What is even more weird is that there are at least 4 different errors, like:
ObjectExistsException: Cache play already exists
SQLException: Attempting to obtain a connection from a pool that has already been shutdown
Configuration error [Cannot connect to database [default]]
No suitable driver found for jdbc:h2:mem:play-test--410454547
Why is it that launching the tests with the default configuration is able to connect to the database, while running them under scct (or JaCoCo) fails to initialize the cache and the db?
specs2 tests run in parallel by default. Play disables parallel execution for the standard unit test configuration, but scct uses a different configuration so it doesn't know not to run in parallel.
Try adding this to your Build.scala:
.settings(parallelExecution in ScctPlugin.ScctTest := false)
Alternatively, you can add sequential to the beginning of your test classes to force all possible run configurations to run sequentially. I've got both in my files still, as I think I had some problems with the Build.scala solution at one point when I was using an early release candidate of Play.
A better option for Scala code coverage is Scoverage, which gives statement-level coverage.
https://github.com/scoverage/scalac-scoverage-plugin
Add to project/plugins.sbt:
addSbtPlugin("com.sksamuel.scoverage" % "sbt-scoverage" % "1.0.1")
Then run SBT with
sbt clean coverage test
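Depending on the plugin version, you may also need to run the report task afterwards to generate the report output (check the documentation for your version of the plugin):
sbt coverageReport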
You need to add sequential at the beginning of your Specification:
class MySpec extends Specification {
  sequential

  "MyApp" should {
    //...//
  }
}