I have a Play! project where I would like to add some code coverage information. So far I have tried JaCoCo and scct. The former has the problem that it is based on bytecode, so it reports methods autogenerated by the Scala compiler, such as copy or canEqual, as untested. scct seems a better option, but either way I get many errors while running the tests.
Let me stick with scct. Essentially, I get errors for every test that tries to connect to the database. Many of my tests load some fixtures into an in-memory H2 database and then make assertions. My Global.scala contains
override def onStart(app: Application) {
  SessionFactory.concreteFactory = Some(() => connection)

  def connection() = {
    Session.create(DB.getConnection()(app), new MySQLInnoDBAdapter)
  }
}
while the tests usually are enclosed in a block like
class MySpec extends Specification {
  def app = FakeApplication(additionalConfiguration = inMemoryDatabase())

  "The models" should {
    "be five" in running(app) {
      Fixtures.load()
      MyModels.all.size should be_==(5)
    }
  }
}
The running(app) call lets me run a test in the context of a working application connected to an in-memory database, at least usually. But when I run coverage tasks such as scct coverage:doc, I get a lot of errors related to connecting to the database.
What is even weirder is that there are at least four different errors, for example:
ObjectExistsException: Cache play already exists
SQLException: Attempting to obtain a connection from a pool that has already been shutdown
Configuration error [Cannot connect to database [default]]
No suitable driver found for jdbc:h2:mem:play-test--410454547
Why is it that launching the tests in the default configuration can connect to the database, while running under scct (or JaCoCo) fails to initialize the cache and the database?
specs2 tests run in parallel by default. Play disables parallel execution for the standard unit test configuration, but scct uses a different configuration, so it doesn't inherit that setting and runs the tests in parallel.
Try adding this to your Build.scala:
.settings(parallelExecution in ScctPlugin.ScctTest := false)
Alternatively, you can add sequential at the beginning of your test classes to force every possible run configuration to execute them sequentially. I still have both in my files, as I think the Build.scala solution gave me some trouble at one point when I was using an early release candidate of Play.
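For context, here is a rough sketch of where that setting lives in a Play 2.1-era project/Build.scala. Treat ScctPlugin.instrumentSettings as an assumption about how the sbt-scct plugin is wired in; the exact name may differ in your plugin version, and the plugin itself must already be declared in project/plugins.sbt:

import sbt._
import Keys._

object ApplicationBuild extends Build {
  val appName = "myapp"            // placeholder project name
  val appVersion = "1.0-SNAPSHOT"
  val appDependencies = Seq.empty

  val main = play.Project(appName, appVersion, appDependencies)
    // assumption: this is how sbt-scct's instrumentation settings are applied
    .settings(ScctPlugin.instrumentSettings: _*)
    // run the scct test configuration sequentially
    .settings(parallelExecution in ScctPlugin.ScctTest := false)
}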
A better option for Scala code coverage is Scoverage, which gives statement-level coverage.
https://github.com/scoverage/scalac-scoverage-plugin
Add to project/plugins.sbt:
addSbtPlugin("com.sksamuel.scoverage" % "sbt-scoverage" % "1.0.1")
Then run SBT with
sbt clean coverage test
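Depending on the plugin version, the report may not be written automatically at the end of the test run; newer sbt-scoverage releases generate it with a separate coverageReport task (treat the exact task name as version-dependent):

sbt clean coverage test coverageReport

The HTML report then lands under target/scala-<version>/scoverage-report.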
You need to add sequential at the beginning of your Specification.
class MySpec extends Specification {
  sequential

  "MyApp" should {
    //...//
  }
}
Related
I have a pytest suite running in this env:
Test session starts (platform: linux, Python 3.6.1, pytest 3.3.1, pytest-sugar 0.9.1)
plugins: flaky-3.5.3, dependency-0.3.2, forked-0.2, logger-0.4.0, sugar-0.9.1, xdist-1.24.1
I have a parametrized test, decorated with flaky, and it is supposed to be re-run max three times if it fails.
@pytest.mark.flaky(max_runs=3)  # re-run this test in case it fails
def test_cucubau(getBauBau_fixture):
    assert cucubau(getBauBau_fixture) == True
However, it fails only once, is not re-run, and my flaky test report is empty:
===Flaky Test Report===
===End Flaky Test Report===
Based on what I have read about the flaky plugin, the usage should be trivial, but I am not able to see what is wrong with my code.
Any ideas?
I believe you need the pytest-rerunfailures plugin for that to work. Then you should be able to annotate your test with @pytest.mark.flaky(reruns=3).
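For reference, pytest-rerunfailures is a separate package (assuming a pip-based setup):

pip install pytest-rerunfailures

Note that its marker is also spelled flaky, but it takes reruns= rather than the Box flaky plugin's max_runs=, which makes the two plugins easy to mix up.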
I have a particular JUnit test which processes a big data file, and when the logging level is left at TRACE it kills Eclipse (something to do with the console handling, which is not relevant to this question).
I often switch between running all my tests using m2e, in which case I don't need any debug log output, and running individual tests, where I often want to see the TRACE output.
To avoid having to edit my log4j2.xml config every time, I coded the test to raise the logging level to INFO for this particular test, as in programmatically-change-log-level-in-log4j2:
@Before
public void beforeTest() {
    LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
    Configuration config = ctx.getConfiguration();
    LoggerConfig loggerConfig = config.getLoggerConfig(LogManager.ROOT_LOGGER_NAME);
    initialLogLevel = loggerConfig.getLevel();
    loggerConfig.setLevel(Level.INFO);
}
But it has no effect.
If the "ROOT_LOGGER" that I am manipulating here represents the same logger as the <root> in my log4j2.xml, then this is not going to work, is it? I need to override all the other loggers, or shut it down completely, but how?
Could it be influenced by my use of slf4j as the log4j2 wrapper in all of my other classes?
I have tried getting hold of the Appenders and calling appender.stop(), but that doesn't work.
You can put a log4j2 config file in the src/test/resources/ directory; during unit tests that file will be used in preference to the one in src/main/resources/.
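For example, Log4j 2 picks up a file named log4j2-test.xml from the test classpath in preference to log4j2.xml. A minimal test-only configuration in src/test/resources/log4j2-test.xml might look like this (a sketch; adjust the appender and pattern to taste):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Console appender; an INFO root level keeps TRACE noise out of Eclipse -->
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d %-5level %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>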
I have a class with a custom logger. Here's a trivial example:
package models

import play.Logger

object AModel {
  val log = Logger.of("amodel")

  def aMethod() {
    if (!log.isInfoEnabled) log.error("Can't log info...")
    log.info("Logging aMethod in AModel")
  }
}
and then we'll enable this logger in application.conf:
logger.amodel=DEBUG
and in development (Play console, using run) this logger does indeed log. But in production, once we hit the message
[info] play - Application started (Prod)
loggers defined like the one above fail to log any further, and instead we go through the error branch. It seems their log level has been changed to ERROR.
Is there any way to correct this undesirable state of affairs? Is there special configuration for production logging?
Edit:
Play's handling of logs in production is a source of difficulty for more than a few people; see https://github.com/playframework/playframework/issues/1186
For some reason Play ships its own logger.xml, which overrides application.conf.
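One workaround discussed in that thread (as I recall for the Play 2.x era; treat the property name as an assumption for your version) is to point Play at your own logback configuration when starting in production, so that it wins over the bundled logger.xml:

-Dlogger.file=conf/prod-logger.xml

passed as a JVM option to the start script, where conf/prod-logger.xml is your own hypothetical logback file.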
When attempting to run integration tests, I've run into a baffling problem where the JVM hangs, using 100% of the CPU. The test that comes with a new Play application works correctly, but as soon as a test requires database interaction, it hangs indefinitely. All my other unit tests run smoothly, connecting to a MySQL database on localhost, and I'd like to be able to use that same setup for my integration tests.
Here is an example of a test that hangs upon calling browser.goTo("/"):
import org.specs2.mutable._
import play.api.test._
import play.api.test.Helpers._

class TestSpec extends Specification {

  "Application" should {
    "work from within a browser" in new WithBrowser(webDriver = HTMLUNIT, app = FakeApplication()) {
      browser.goTo("/")
      println(browser.pageSource)

      browser.$("#email").text("test@fakeemail.com")
      browser.$("#password").text("password")
      browser.$("#loginbutton").click()

      browser.pageSource must not contain("Sign in")
      browser.pageSource must contain("Logout")
    }
  }
}
The issue in my case was the Selenium version. Adding this line to appDependencies in Build.scala upgrades Selenium:
"org.seleniumhq.selenium" % "selenium-java" % "2.35.0" % "test"
From there I was able to use both HTMLUNIT and FIREFOX for web drivers without any issues.
Have you tried setting a port, such as 3333, and then using localhost?
browser.goTo("http://localhost:3333/")
Have you solved this? I have the same problem; it also hangs on a simple route(FakeRequest) if there is any database connection.
I solved this by setting (in Build.scala):
.settings( parallelExecution in Test := false)
This helped with FakeRequest, but the Selenium tests still hang.
I am trying to get Integration Tests to work in Play Framework 2.1.1
My goal is to be able to run integration tests after the unit tests, to exercise component-level functionality against the underlying database. The database will have stored procedures, so it is essential that I can do more than just the inMemoryDatabase you can configure in the FakeApplication.
I would like the process to be:
1. Start a TestServer with a FakeApplication, using an alternative integration-test conf file
2. Run the integration tests (filtering by package or whatever is convenient)
3. Stop the TestServer
I believe the best way to do this is in the Build.scala file.
I need assistance with how to set up the Build.scala file, as well as how to load the alternative integration-test config file (project/it.conf right now).
Any assistance is greatly appreciated!
I have introduced a method that works for the time being. I would like to see Play introduce the concept of separate Test vs. IntegrationTest scopes in sbt.
I could go into Play, see how they build their project and settings in sbt, and try to get the IntegrationTest scope to work, but for now I have spent too much time trying to get it functioning.
What I did was create a specs2 Around with Scope class that lets me enforce a singleton TestServer instance. Anything that uses the class will attempt to start the test server; if it is already running, it won't be restarted.
It appears as though Play and SBT do a good job of making sure the server is shut down when the test ends, which works so far.
Here is the sample code. Still hoping for some more feedback.
import org.specs2.execute.{ AsResult, Result }
import org.specs2.mutable.Around
import org.specs2.specification.Scope
import play.api.test.{ FakeApplication, Helpers, Port, TestServer }

class WithTestServer(val app: FakeApplication = FakeApplication(),
                     val port: Int = Helpers.testServerPort) extends Around with Scope {

  implicit def implicitApp = app
  implicit def implicitPort: Port = port

  // Start the shared server the first time any example uses this scope;
  // later instances find it already running and reuse it.
  synchronized {
    if (!WithTestServer.isRunning) {
      WithTestServer.start(app, port)
    }
  }

  // Implements around an example
  override def around[T: AsResult](t: => T): Result = {
    println("Running test with test server===================")
    AsResult(t)
  }
}

object WithTestServer {
  var singletonTestServer: TestServer = null
  var isRunning = false

  def start(app: FakeApplication = FakeApplication(), port: Int = Helpers.testServerPort) = {
    implicit def implicitApp = app
    implicit def implicitPort: Port = port

    singletonTestServer = TestServer(port, app)
    singletonTestServer.start()
    isRunning = true
  }
}
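A hypothetical usage sketch (assuming the same specs2/Play test imports as above), showing two examples sharing the one server:

class IntegrationSpec extends Specification {

  "The application" should {
    "see the shared test server running" in new WithTestServer {
      // app and port come from WithTestServer; the server was started
      // by whichever example got here first
      WithTestServer.isRunning must beTrue
    }
    "reuse the same server in a second example" in new WithTestServer {
      port must be_==(Helpers.testServerPort)
    }
  }
}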
To take this a step further, I simply set up two folders (packages) in the play/test folder:
- test/unit (the test.unit package)
- test/integration (the test.integration package)
Now, when I run from my Jenkins server, I can run:
play test-only test.unit.*Spec
That will execute all unit tests.
To run my integration tests, I run:
play test-only test.integration.*Spec
That's it. This works for me for the time being, until Play adds integration testing as a lifecycle step.
The answer for this is shared in this blog post https://blog.knoldus.com/integration-test-configuration-in-play-framework/
Basically, in build.sbt:
// define a new configuration
lazy val ITest = config("it") extend (Test)

// and add it to your project:
lazy val yourProject = (project in file("yourProject"))
  .configs(ITest)
  .settings(
    inConfig(ITest)(Defaults.testSettings),
    ITest / scalaSource := baseDirectory.value / "it",
    [the rest of your configuration comes here])
  .enablePlugins(PlayScala)
Just tested this in 2.8.3 and works like a charm.
Launch your ITs from sbt using:
[yourProject] $ it:test
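With sbt 1.x you can also use the unified slash syntax for the same task:

[yourProject] $ ITest / test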