Fixtures in Play! 2 for Scala

I am trying to do some integration testing in a Play! 2 for Scala application. For this, I need to load some fixtures to have the DB in a known state before each test.
At the moment, I am just invoking a method that executes a bunch of Squeryl statements to load data. But defining the fixtures declaratively, either with a Scala DSL or in a language like JSON or YAML, would be more readable and easier to maintain.
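Roughly, my current loader looks like this (a simplified sketch; User and AppSchema are stand-ins for my actual Squeryl model and schema):

import org.squeryl.PrimitiveTypeMode._
import models.{AppSchema, User} // hypothetical schema and model

def loadFixtures(): Unit = transaction {
  // insert a known set of rows before each test
  AppSchema.users.insert(User("alice", "alice@example.com"))
  AppSchema.users.insert(User("bob", "bob@example.com"))
}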
In this example of a Java application I see that fixtures are loaded from a YAML file, but the equivalent Scala app resorts to manual loading, as I am doing right now.
I have also found this project, which is not very well documented and seems a bit more complex than I'd like - it is not even clear to me where the fixture data is actually declared.
Are there any other options to load fixtures in a Play! application?

Use Evolutions. Write setup and teardown scripts for the fixtures in SQL, or use mysqldump (or the equivalent for your DB) to export an existing test DB as SQL.
http://www.playframework.org/documentation/1.2/evolutions
I find the most stress-free way to do testing is to set everything up in an in-memory database, which means the tests run fast, and to drive the tests from Java using JUnit. I use H2DB, but there are a few gotchas you need to watch out for. I learned these the hard way, so this should save you some time.
Play has a nice system for setting up and tearing down your application for integration testing, using running( FakeApplication() ) { .. }, and you can configure it to use an in-memory database with FakeApplication(additionalConfiguration = inMemoryDatabase()). See:
http://www.playframework.org/documentation/2.0/ScalaTest
OutOfMemory errors: However, running a sizeable test fixture a few times on my machine caused OutOfMemory errors. This seems to be because the default implementation of the inMemoryDatabase() function creates a new, randomly named database and doesn't clean up the old ones between test runs. Creating a fresh database each run isn't necessary if you've written your evolution teardown scripts correctly, because the database will be emptied out and refilled between each test. So we overrode this behaviour to use the same database every time, and the memory issues disappeared.
DB dialect: Another issue is that our production database is MySQL, which has a number of incompatibilities with H2DB. H2DB has compatibility modes for a number of databases, which should reduce the number of problems you have:
http://www.h2database.com/html/features.html#compatibility
Putting this all together makes it a little unwieldy to add before each test, so I extracted it into a function:
def memDB[T](code: => T) =
  running( FakeApplication( additionalConfiguration = Map(
    "db.default.driver" -> "org.h2.Driver",
    "db.default.url"    -> "jdbc:h2:mem:test;MODE=MySQL"
  ) ) )(code)
You can then use it like so (specs example):
"My app" should {
"integrate nicely" in memDB {
.....
}
}
Every test will start a fake application, run your fixture setup evolutions script, run the test, then tear it all down again. Good luck!
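If you also want to load fixtures programmatically (as in the question) instead of, or on top of, an evolutions script, one option is a variant of the wrapper that runs a fixture loader before the test body. This is only a sketch; loadFixtures() is an assumed helper, e.g. a Squeryl-based loader like the one in the question:

def memDBWithFixtures[T](code: => T) =
  running( FakeApplication( additionalConfiguration = Map(
    "db.default.driver" -> "org.h2.Driver",
    "db.default.url"    -> "jdbc:h2:mem:test;MODE=MySQL"
  ) ) ) {
    loadFixtures() // assumed helper that inserts the known test data
    code
  }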

Why not use the Java example in Scala? That exact code should also work without modifications in Scala...
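For example, something along these lines should work - a rough sketch only, where conf/initial-data.yml and the save helper are placeholders for your own fixture file and persistence code:

import play.libs.Yaml
import scala.collection.JavaConverters._

object Fixtures {
  // Placeholder: persist one loaded object with whatever you use (Squeryl, Anorm, ...)
  def save(obj: AnyRef): Unit = () // TODO: your persistence call here

  // Load conf/initial-data.yml (a top-level YAML list) with Play's Java Yaml helper,
  // just like the Java sample does, then persist each entry.
  def load(): Unit =
    Yaml.load("initial-data.yml")
      .asInstanceOf[java.util.List[AnyRef]]
      .asScala
      .foreach(save)
}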

Related

What is the right way to implement setUpTestData style behaviour in py.test

I have a bunch of fixture setup that I want to do once for the test class, but I also don't want the associated tests messing with it.
I don't really get the py.test fixture system yet, so I'm not seeing how this is supposed to be done.
In vanilla Django this is achieved with setUpTestData, which lets you create fixtures in the DB once for the test class. Then, at the start of each test case, it creates a transaction savepoint, and at the end of each test it rolls back to that savepoint. (This is in addition to wrapping the entire class in a transaction so as to leave a clean database at the end.)
I could get this functionality by inheriting from Django's TransactionTestCase, but pytest-django seems to want to run without that, and I've achieved everything else I need without doing so.
I've looked around for this quite a lot, and the best I could find is this pytest plugin, which I have not tested myself:
https://github.com/tipsi/pytest-tipsi-django
Per the pytest-django issue list, it looks like this has been raised before, but there is no movement towards a fix:
https://github.com/pytest-dev/pytest-django/issues/514

ASP Boilerplate problems using Effort in unit testing with EFProf (Entity Framework Profiler)

Having issues using EFProf (http://www.hibernatingrhinos.com/products/EFProf) with ASP Boilerplate (http://www.aspnetboilerplate.com/).
For unit testing, ASP Boilerplate uses Effort (https://github.com/tamasflamich/effort) for mocking the database in-memory.
If I run the unit tests without adding the reference to EFProf, the tests run correctly (green).
If I add the initialization line:
HibernatingRhinos.Profiler.Appender.EntityFramework.EntityFrameworkProfiler.Initialize();
in either my test base ctor or my application project's Initialize(), I get the following error:
Castle.MicroKernel.ComponentActivator.ComponentActivatorException
ComponentActivator: could not instantiate MyApp.EntityFramework.MyAppDataContext
The inner exception has the relevant information:
Error: Unable to cast object of type 'Effort.Provider.EffortConnection' to type 'HibernatingRhinos.Profiler.Appender.ProfiledDataAccess.ProfiledConnection'.
Is Effort just not compatible with EFProf? Or am I doing something blindingly obvious wrong in my initialization?
Answering my own question: Effort fakes the DbContext object but does not actually generate SQL for the in-memory database, so there is nothing for profilers to intercept. This is also the reason why the CommandText is always null when using EF6's Database.Log with Effort.
I am going to try using Moq with EF6 to get an in-memory database implementation for testing, as an alternative to ASP Boilerplate's testing project that utilizes Effort, per this article: https://msdn.microsoft.com/en-us/library/dn314429(v=vs.113).aspx

FitNesse: automatic fixture stub generation

When I write a test in FitNesse I usually write several tables in wiki format first and then write the fixture code afterwards. I do that by executing the test on the wiki server and then creating the fixture classes with names copied from the error messages of the failed test page execution.
This is an annoying process that could be handled by an automatic stub generator which creates the fixture classes with the appropriate class and method names.
Is there already such a generator available?
Not as far as I know. It sounds like you are using Fit, correct?
It sounds like an interesting feature, maybe you can create one as a plugin?

JUnit Fork-Mode in Java Classes

There's support for forkMode in Ant and Maven, and occasionally we use it with the value perTest. However, the JUnit tests in Eclipse still fail when we run the tests on a class or on a project (Run As -> JUnit Test). Obviously JUnit uses its default settings or behaviour and executes the tests in parallel, causing some red crosses in the JUnit view.
Is there a way to code something into the test-class that lets JUnit behave like the forkMode setting? We don't mind if there's an Eclipse-only solution for this.
Or can this be done with a Run Configuration in Eclipse?
EDIT:
I understand that the problems are caused by data remaining after tests, which makes further tests fail. While this makes sense, please understand that this doesn't answer my question. Think of my situation as being part of some sort of Tiger Team. We have a bunch of issues, and fixing that part of the existing tests is just one of them. Trust me, we will try to cover everything... (I haven't heard that in a while)
Eclipse runs JUnit tests serially, in a single thread, in the same JVM. If you have tests that normally operate in parallel, this should not affect the test behavior. However, if you assume that you can change VM-wide state, like system properties or static class variables, and that the next test will not see those changes, that will break your tests.
The rule of thumb is that each test should leave the system (VM, database, filesystem) exactly as it found it, so that each test can be run at any time, in any order.
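As a small illustration of that rule of thumb, here is a sketch of a JUnit 4 test (written in Scala, which JUnit runs just as happily) that touches a system property and restores it in teardown; the property name my.feature.flag is made up for the example:

import org.junit.{After, Before, Test}
import org.junit.Assert.assertEquals

class FeatureFlagTest {
  private var savedFlag: String = _

  @Before def rememberAndSet(): Unit = {
    // remember whatever value (possibly null) was set before this test
    savedFlag = System.getProperty("my.feature.flag")
    System.setProperty("my.feature.flag", "on")
  }

  @After def restore(): Unit = {
    // put the VM back exactly as we found it
    if (savedFlag == null) System.clearProperty("my.feature.flag")
    else System.setProperty("my.feature.flag", savedFlag)
  }

  @Test def featureIsEnabled(): Unit =
    assertEquals("on", System.getProperty("my.feature.flag"))
}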

How to run some but not all tests in a Perl test suite in parallel?

I've got a Perl-based test suite with 10,000+ tests that I would like to make run faster. I've tested using the -j flag to prove, and I have found that most, but not all, of my tests are ready to run in parallel.
While I can work on making the remaining tests "parallel friendly", I expect there will always be some tests which are not. What's a good way to manage this? I would like it to be easy to run the whole set of tests efficiently, and easy to mark tests as "not parallel-ready" when I need to.
Here are some options I see:
prove could be patched to support marking some tests as not parallel-ready
Jenkins is being used to manage the test suite runs. I could split off the non-parallel tests into their own run. In other words, give up and use two test runs.
Perhaps there is a way to merge two TAP result streams together that I have yet to discover.
I'm not too concerned with how I will manage the list of exceptions. Either I can keep a list in a file as part of the test harness infrastructure, or I could put something in each test header that would mark it as such, and our test harness could determine the list of exceptions dynamically.
( The test suite is partially based on Test::Class, and I'll also be looking at Test::Class::Load to speed it up as well. )
I found a solution. It's in the documentation for aggregate_tests() in TAP::Harness, which includes a code sample showing how I could write my own harness for this purpose:
...This is useful, for example, in the case where some tests should
run in parallel but others are unsuitable for parallel execution.
my $formatter   = TAP::Formatter::Console->new;
my $ser_harness = TAP::Harness->new( { formatter => $formatter } );
my $par_harness = TAP::Harness->new(
    {   formatter => $formatter,
        jobs      => 9
    }
);
my $aggregator = TAP::Parser::Aggregator->new;
$aggregator->start();
$ser_harness->aggregate_tests( $aggregator, @ser_tests );
$par_harness->aggregate_tests( $aggregator, @par_tests );
$aggregator->stop();
$formatter->summary($aggregator);
From there it looks like I could:
Sub-class App::Prove and override _runtests(), which is where the new functionality above could be merged in.
Fork prove so that it calls My::App::Prove instead of App::Prove.
Now that I better understand how the pieces fit together, I can see how I might create a patch for prove that would add an option like --exclude-from-parallel FILE, which would allow you to specify a file containing a list of test files to be excluded from parallel testing.
UPDATE 2012-08-16: I have a patch for prove now, and have submitted it for review. You can view and comment on the Pull Request. No summary is produced after the run output. It's not clear why.
I've now found the best solution so far to this problem. It turns out that prove has had undocumented support since 2008 for marking some tests to be run in sequence instead of in parallel. It's backed by a rather fancy "rules" system in TAP::Parser::Scheduler that allows for complex specifications of ordering arrangements for parallel and sequential test runs.
Here's the basic current recipe for prove:
# All tests are allowed to run in parallel, except those starting with "p"
--rules='seq=t/p*.t' --rules='par=**'
I have a new pull request that adds documentation for this feature, and have started a discussion about possibly offering a simpler syntax for basic exceptions as well. See the pull request for details.
I found another solution which advertised this feature, but I could only get trivial cases to work. It's to use Test::Steering. It allows me to do this:
include_tests( { jobs => 4 }, @parallel_tests );
include_tests( @serial_tests );
With this solution, be aware:
Before it actually works, I had to patch the code to fix a basic bug that has remained unfixed for multiple years.
Additional code is needed to handle generating the lists of parallel and serial tests to run.
I didn't actually get a combined summary for my real-world test run... both sections emitted their own summary reports, so it didn't really work. Maybe I missed something, or maybe it's broken.
Test::Parallel also provides an easier way to run some tests in parallel.
Have a look at the sample at https://metacpan.org/pod/Test::Parallel
Another option: use a rules file for TAP::Harness.
You can build custom rules in a YAML file (called testrules.yml by default). I needed something similar to what you describe, which I was able to do with a testrules.yml file that looked like this:
---
seq:
    # tests that are not parallel-ready (will run in isolation)
    - seq:
        - t/test1.t
        - t/test2.t
    # tests that can run in parallel
    - par:
        # wildcard for everything else
        - **
In my case, I was using this with code that directly called App::Prove, rather than command-line prove. But I think it would work with prove too?