I've been following the Scala testing examples using Specs2 from the official Play documentation. I notice that they use WithApplication to start up a fake application to test against, with code like the following:
"something" should {
"do X" in new WithApplication { /* ... */ }
"do Y" in new WithApplication { /* ... */ }
"do Z" in new WithApplication { /* ... */ }
}
This is fine, but the problem I'm having is that I incur the cost of starting my application each time this happens. That isn't necessarily fast, or at least not fast enough once your test suite grows to a reasonable size. I've tried things like:
val app = FakeApplication()
"something" should {
"do X" in new WithApplication(app) { /* ... */ }
"do Y" in new WithApplication(app) { /* ... */ }
"do Z" in new WithApplication(app) { /* ... */ }
}
and
"something" should {
val app = FakeApplication()
Helpers.running(app) {
"do X" in { /* ... */ }
"do Y" in { /* ... */ }
"do Z" in { /* ... */ }
}
}
The first seems to work for the first test and then complains about DB connection issues in the later tests. I'm guessing something is getting shut down after the first test, though I'm not sure what.
The second doesn't work at all, because it complains that there is no running application, which I also don't understand.
Any help is greatly appreciated. Thanks!
Well, it depends on what you want to test. If you're just unit testing code that has no external dependencies, or whose dependencies you can mock or stub out (and it's a good idea to structure your code in a way that allows this), then you don't need WithApplication at all. This is probably the best approach.
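For instance, a plain specs2 spec against a hand-rolled stub needs no application at all. The classes and names below are made up purely for illustration:

import org.specs2.mutable.Specification

// The code under test depends on an abstraction...
trait UserRepository { def find(id: Long): Option[String] }

class GreetingService(repo: UserRepository) {
  def greeting(id: Long): String = repo.find(id).map("Hello, " + _).getOrElse("Who are you?")
}

// ...so the spec can exercise it with a stub and never needs WithApplication.
class GreetingServiceSpec extends Specification {
  val stubRepo = new UserRepository { def find(id: Long) = Some("Alice") }

  "GreetingService" should {
    "greet a known user" in {
      new GreetingService(stubRepo).greeting(1L) must beEqualTo("Hello, Alice")
    }
  }
}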
The first solution you provided doesn't work because an application can only be used once. It's WithApplication that starts and stops your application, so even if that did work, you wouldn't get any performance benefit.
The second solution you provided doesn't work because the Helpers.running(app) { } block only declares the specs. specs2 collects them all in a list, then you exit the running block and the app shuts down. Later, when specs2 actually runs the tests, there is of course no running application.
So, if you can't test your code in isolation from the rest of your app, then you need a running app; there's nothing you can do about that, it's the reality of integration testing. And you probably want it started and shut down between each test, otherwise your tests aren't running in isolation from each other.
It's an old question, but I'll give my answer since I faced the same problem and had a similar idea. specs2 has AfterAll and BeforeAll traits (maybe they weren't there at the time of the original post), so my solution is basically:
package com.equipx.spec.util

import org.specs2.specification.{AfterAll, BeforeAll}

import play.Application
import play.api.Play
import play.test.{Helpers, FakeApplication}

/**
 * @author Anton Oparin (antono@clemble.com)
 */
trait WithGlobalApplication extends BeforeAll with AfterAll {

  protected var app: Application = null

  /**
   * Override this method to setup the application to use.
   *
   * By default this will call the old {@link #provideFakeApplication() provideFakeApplication} method.
   *
   * @return The application to use
   */
  protected def provideApplication: Application =
    provideFakeApplication

  /**
   * Old method - use the new {@link #provideApplication() provideApplication} method instead.
   *
   * Override this method to setup the fake application to use.
   *
   * @return The fake application to use
   */
  protected def provideFakeApplication: FakeApplication =
    Helpers.fakeApplication

  override def beforeAll(): Unit = {
    app = provideApplication
    Helpers.start(app)
    Play.current
  }

  override def afterAll(): Unit = {
    if (app != null) {
      Helpers.stop(app)
      app = null
    }
  }
}
Basically I took the WithApplication implementation and made it global.
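A hypothetical usage sketch: a spec mixes in the trait so the application is started once before all examples and stopped once after them. The spec and example names are illustrative; since the examples share one running application, you may also want specs2's sequential mode:

import org.specs2.mutable.Specification
import com.equipx.spec.util.WithGlobalApplication

class AccountServiceSpec extends Specification with WithGlobalApplication {
  sequential // the examples share one running application

  "AccountService" should {
    "do X" in { /* talks to the shared running app */ ok }
    "do Y" in { ok }
  }
}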
It's my first time using Mockery with PHPUnit. I followed examples from this forum and I'm still getting this error:
Mockery\Exception\InvalidCountException: Method all() from Mockery_0_App_Card should be called exactly 1 times but called 0 times.
Basically, I'm injecting my model into my controller, like this:
class CardController extends Controller
{
    protected $repository;

    function __construct(Model $repository)
    {
        $this->repository = $repository;
    }

    public function index()
    {
        $data = $this->repository->all();
        return $data;
    }
}
And I'm trying to test it like this:
class CardTest extends TestCase
{
    protected $mock;

    protected function setUp(): void
    {
        parent::setUp();
        $this->mock = Mockery::mock('Model', '\App\Card');
    }

    public function testCardsList()
    {
        $this->mock->shouldReceive('all')
            ->once()
            ->andReturn(json_encode([
                [
                    "id" => 1,
                    "name" => "Aut modi quasi corrupti.",
                    "content" => "..."
                ],
                [
                    "id" => 2,
                    "name" => "Voluptas quia distinctio.",
                    "content" => "..."
                ]
            ]));

        $this->app->instance('\App\Card', $this->mock);

        $response = $this->json('GET', $this->api.'/cards');
        $this->assertEquals(200, $response->status(), 'Response code must be 200');
    }
}
I've tried a couple of variants, but it's always the same: setting up Mockery in the controller, using the Card::class notation, and so on. Any clue?
Also, I'm sure that the response is pulling data from the DB and not using the array I provided, so Mockery is having no effect on my model.
After some reading, I'm convinced that testing against a SQLite DB is better than creating mocks for models: you don't have to do as much work building mocks. I'm linking to some discussion threads about how to set up the test environment, but I'm also pasting the code that I ended up writing.
Basically, you have to configure the DB connection to be SQLite and declare that it will run in memory, which is much faster than using a file.
Then you want to run your migrations, and in my case, also seed the DB.
<?php

namespace Tests;

use DirectoryIterator;
use Illuminate\Foundation\Testing\TestCase as BaseTestCase;
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Config;

abstract class TestCase extends BaseTestCase
{
    use CreatesApplication;

    protected function setUp(): void
    {
        parent::setUp();

        Config::set('database.connections.sqlite.database', ':memory:');
        Config::set('database.default', 'sqlite');

        Artisan::call('migrate');
        Artisan::call('db:seed');
    }

    protected function tearDown(): void
    {
        Artisan::call('migrate:reset');

        parent::tearDown();
    }
}
There's a caveat: setUp() is called once for each test, and I found out that this kind of annotation will not work, because the DB gets regenerated each time:
@depends testCreateCard
I was using that annotation to pass an ID from testCreateCard() to several other methods, but I ended up trusting my seeds and using a hardcoded value.
Refs:
Laravel 5 - Using Mockery to mock Eloquent model
https://www.patrickstephan.me/post/setting-up-a-laravel-5-test-database.html
https://laracasts.com/discuss/channels/testing/how-to-specify-a-testing-database-in-laravel-5
https://laracasts.com/discuss/channels/general-discussion/how-to-migrate-a-testing-database-in-laravel-5
I know that in Play! with Scala there is no Http.context available, since the idea is to leverage implicits to pass any data around your stack. However, this seems like a lot of boilerplate to deal with when you need a piece of information available for the entire request.
More specifically, what I'm interested in is tracking a UUID that is passed in a request header and making it available to any logger, so that each request gets its own unique identifier. I'd like this to be seamless for anyone who calls into a logger (or log wrapper).
Coming from a .NET background, the HTTP context flows with async calls, and this is also possible with the call context in WCF. At that point you can register a function with the logger to return the current UUID for the request, based on a logging pattern like "%requestID%".
Building a larger distributed system, you need to be able to correlate requests across multiple stacks.
But, being new to Scala and Play, I'm not even sure where to look for a way to do this.
What you are looking for in Java is called the Mapped Diagnostic Context, or MDC (at least by SLF4J); here's an article I found that details how to set this up for Play. In the interest of preserving the details for future visitors, here is the code used for an MDC-propagating Akka dispatcher:
package monitoring

import java.util.concurrent.TimeUnit

import akka.dispatch._
import com.typesafe.config.Config
import org.slf4j.MDC

import scala.concurrent.ExecutionContext
import scala.concurrent.duration.{Duration, FiniteDuration}

/**
 * Configurator for an MDC propagating dispatcher.
 * Authored by Yann Simon
 * See: http://yanns.github.io/blog/2014/05/04/slf4j-mapped-diagnostic-context-mdc-with-play-framework/
 *
 * To use it, configure Play like this:
 * {{{
 * play {
 *   akka {
 *     actor {
 *       default-dispatcher = {
 *         type = "monitoring.MDCPropagatingDispatcherConfigurator"
 *       }
 *     }
 *   }
 * }
 * }}}
 *
 * Credits to James Roper for the [[https://github.com/jroper/thread-local-context-propagation/ initial implementation]]
 */
class MDCPropagatingDispatcherConfigurator(config: Config, prerequisites: DispatcherPrerequisites)
  extends MessageDispatcherConfigurator(config, prerequisites) {

  private val instance = new MDCPropagatingDispatcher(
    this,
    config.getString("id"),
    config.getInt("throughput"),
    FiniteDuration(config.getDuration("throughput-deadline-time", TimeUnit.NANOSECONDS), TimeUnit.NANOSECONDS),
    configureExecutor(),
    FiniteDuration(config.getDuration("shutdown-timeout", TimeUnit.MILLISECONDS), TimeUnit.MILLISECONDS))

  override def dispatcher(): MessageDispatcher = instance
}

/**
 * An MDC propagating dispatcher.
 *
 * This dispatcher propagates the current request's MDC context if it is set when the task is dispatched.
 */
class MDCPropagatingDispatcher(_configurator: MessageDispatcherConfigurator,
                               id: String,
                               throughput: Int,
                               throughputDeadlineTime: Duration,
                               executorServiceFactoryProvider: ExecutorServiceFactoryProvider,
                               shutdownTimeout: FiniteDuration)
  extends Dispatcher(_configurator, id, throughput, throughputDeadlineTime, executorServiceFactoryProvider, shutdownTimeout) {
  self =>

  override def prepare(): ExecutionContext = new ExecutionContext {
    // capture the MDC of the thread that prepares the execution
    val mdcContext = MDC.getCopyOfContextMap

    def execute(r: Runnable) = self.execute(new Runnable {
      def run() = {
        // back up the executing thread's MDC context
        val oldMDCContext = MDC.getCopyOfContextMap
        // run the runnable with the captured context
        setContextMap(mdcContext)
        try {
          r.run()
        } finally {
          // restore the executing thread's MDC context
          setContextMap(oldMDCContext)
        }
      }
    })

    def reportFailure(t: Throwable) = self.reportFailure(t)
  }

  private[this] def setContextMap(context: java.util.Map[String, String]): Unit = {
    if (context == null) {
      MDC.clear()
    } else {
      MDC.setContextMap(context)
    }
  }
}
You can then set values in the MDC using MDC.put and remove them using MDC.remove (alternatively, take a look at putCloseable if you need to add and remove some context around a set of synchronous calls):
import org.slf4j.MDC
// Somewhere in a handler
MDC.put("X-UserId", currentUser.id)
// Later, when the user is no longer available
MDC.remove("X-UserId")
and add them to your logging output using %mdc{field-name:default-value}:
<!-- an example from the blog -->
<appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} %coloredLevel %logger{35} %mdc{X-UserId:--} - %msg%n%rootException</pattern>
  </encoder>
</appender>
There are more details in the linked blog post about tweaking the ExecutionContext that Play uses to propagate the MDC context correctly (as an alternative approach).
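To tie this back to the original question (a per-request UUID), here is a hedged sketch of a filter that copies a request header into the MDC. The filter name, header name, and MDC key are illustrative, it assumes the Play 2.3/2.4-era Filter API to match the rest of this thread (newer Play versions also require a Materializer), and it only behaves reliably together with an MDC-propagating execution context such as the dispatcher above, since the future may complete on another thread:

import java.util.UUID

import org.slf4j.MDC
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.mvc.{Filter, RequestHeader, Result}

import scala.concurrent.Future

object RequestIdFilter extends Filter {
  def apply(next: RequestHeader => Future[Result])(rh: RequestHeader): Future[Result] = {
    // take the caller-supplied id, or generate one for this request
    val requestId = rh.headers.get("X-Request-Id").getOrElse(UUID.randomUUID.toString)
    MDC.put("requestId", requestId) // picked up by %mdc{requestId:--} in the logback pattern
    next(rh).andThen { case _ => MDC.remove("requestId") }
  }
}

You would then register it in your filter chain, for example via WithFilters in the Global object on Play 2.3/2.4.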
I'm currently using MongoDB and I want to be able to run integration and functional tests on any machine (currently on a dedicated build server, and in the future on a CI server).
The main problem is that I have to be able to check the MongoDB installation (and install it if it's not present), start a MongoDB instance on startup, and shut it down once the process has finished.
There's an existing question here, Embedded MongoDB when running integration tests, that suggests installing a Gradle or Maven plugin.
This Gradle plugin https://github.com/sourcemuse/GradleMongoPlugin/ can do it, but then I would have to manage my dependencies with it (I already tried). The problem with this approach is not Gradle itself, but that when I tried it I lost all the benefits of my IDE (STS, IntelliJ).
Did anyone manage to do this?
If someone has configured Gradle with a Grails project without losing the Grails perspective, I would appreciate that help too!
Thanks!
Trygve.
I have recently created a Grails plugin for this purpose: https://github.com/grails-plugins/grails-embedded-mongodb
Currently it is in snapshot, but I plan to publish a release this week.
I've had good results using an in-memory Mongo server (Fongo) for integration tests. It runs fast and doesn't require starting up a separate Mongo server or dealing with special Grails or Maven config, which means the tests run equally well with any JUnit test runner, i.e. within any IDE or build system. No extra setup required.
In-memory Mongo example
I have also used the "flapdoodle" embedded Mongo server for testing. It uses a different approach, in that it downloads and runs a real Mongo instance as a separate process. I found that this mechanism has more moving parts and seems to be overkill when all I really want to do is verify that my app works correctly against a Mongo server.
Better late than never: unfortunately I found that Fongo does not address all of my requirements; most notably, $eval is not implemented, so you cannot run integration tests with migration tools such as Mongeez.
I settled for EmbedMongo, which I am using in my Spock/Geb integration tests via JUnit ExternalResource rules. Gary is right that a real managed DB comes with more moving parts, but I found that I'd rather take that risk than rely on a mock implementation. So far it has worked quite well, give or take an unclean database shutdown during test suite teardown, which fortunately does not impact the tests. You would use the rules as follows:
@Integration(applicationClass = Application)
@TestFor(SomeGrailsArtifact) // this will inject grailsApplication
class SomeGrailsArtifactFunctionalSpec extends Specification {

    @Shared @ClassRule
    EmbedMongoRule embedMongoRule = new EmbedMongoRule(grailsApplication)

    @Rule
    ResetDatabaseRule resetDatabaseRule = new ResetDatabaseRule(embedMongoRule.db)

    ...
For the sake of completeness, these are the rule implementations:
EmbedMongoRule.groovy
import org.junit.rules.ExternalResource

import com.mongodb.MongoClient
import com.mongodb.MongoException

import de.flapdoodle.embed.mongo.MongodProcess
import de.flapdoodle.embed.mongo.MongodStarter
import de.flapdoodle.embed.mongo.config.IMongodConfig
import de.flapdoodle.embed.mongo.config.MongodConfigBuilder
import de.flapdoodle.embed.mongo.config.Net
import de.flapdoodle.embed.mongo.distribution.Version
import de.flapdoodle.embed.process.runtime.Network

/**
 * Rule for {@code EmbedMongo}, a managed full-fledged MongoDB. The first time
 * this rule is used, it will download the current production MongoDB release,
 * spin it up before tests and tear it down afterwards.
 *
 * @author Michael Jess
 *
 */
public class EmbedMongoRule extends ExternalResource {

    private def mongoConfig
    private def mongodExecutable

    public EmbedMongoRule(grailsApplication) {
        if (!grailsApplication) {
            throw new IllegalArgumentException(
                "Got null grailsApplication; have you forgotten to supply it to the rule?\n" +
                "\n" +
                "@Integration(applicationClass = Application)\n" +
                "@TestFor(MyGrailsArtifact) // will inject grailsApplication\n" +
                "class MyGrailsArtifactSpec extends ... {\n" +
                "\n" +
                "\t...\n" +
                "\t@Shared @ClassRule EmbedMongoRule embedMongoRule = new EmbedMongoRule(grailsApplication)\n" +
                "\t...\n" +
                "}")
        }
        mongoConfig = grailsApplication.config.grails.mongodb
    }

    @Override
    protected void before() throws Throwable {
        try {
            MongodStarter starter = MongodStarter.getDefaultInstance()
            IMongodConfig mongodConfig = new MongodConfigBuilder()
                    .version(Version.Main.PRODUCTION)
                    .net(new Net(mongoConfig.port, Network.localhostIsIPv6()))
                    .build()
            mongodExecutable = starter.prepare(mongodConfig)
            MongodProcess mongod = mongodExecutable.start()
        } catch (IOException e) {
            throw new IllegalStateException("Unable to start embedded mongo", e)
        }
    }

    @Override
    protected void after() {
        mongodExecutable.stop()
    }

    /**
     * Returns a new {@code DB} for the managed database.
     *
     * @return A new DB
     * @throws IllegalStateException If an {@code UnknownHostException}
     *         or a {@code MongoException} occurs
     */
    public def getDb() {
        try {
            return new MongoClient(mongoConfig.host, mongoConfig.port).getDB(mongoConfig.databaseName)
        } catch (UnknownHostException | MongoException e) {
            throw new IllegalStateException("Unable to retrieve MongoClient", e)
        }
    }
}
ResetDatabaseRule.groovy - currently not working since GORM ignores the grails.mongodb.databaseName parameter as of org.grails.plugins:mongodb:4.0.0 (grails 3.x)
import org.junit.rules.ExternalResource

/**
 * Rule that will clear whatever Mongo {@code DB} is provided.
 * More specifically, all non-system collections are dropped from the database.
 *
 * @author Michael Jess
 *
 */
public class ResetDatabaseRule extends ExternalResource {

    /**
     * Prefix identifying system tables
     */
    private static final String SYSTEM_TABLE_PREFIX = "system"

    private def db

    /**
     * Create a new database reset rule for the specified datastore.
     *
     * @param db Reference to the {@link DB} instance to reset.
     */
    ResetDatabaseRule(db) {
        this.db = db
    }

    @Override
    protected void before() throws Throwable {
        db.collectionNames
            .findAll { !it.startsWith(SYSTEM_TABLE_PREFIX) }
            .each { db.getCollection(it).drop() }
    }
}
I have an abstract class whose source code looks like this:
/*
 * @assert (0) == NULL
 */
public static function factory($num) {
    if ($num == 0)
        return NULL;
    // do some other stuff
}
If I delete the previously generated test file and use "Create PHPUnit tests", it creates a new unit test file that doesn't seem to have taken the @assert annotation into account at all:
/**
 * @covers {className}::{origMethodName}
 * @todo Implement testFactory().
 */
public function testFactory() {
    // Remove the following lines when you implement this test.
    $this->markTestIncomplete(
        'This test has not been implemented yet.'
    );
}
I must be doing something silly, but I can't figure out what. Is the failure to expand the class name and method name in the generated @covers annotation perhaps a clue?
I'm running NetBeans 7.0.1 on a Mac with PHP 5.3.6 and PHPUnit 3.6.2.
All annotations must appear in DocBlock comments which start with /** and not /*. You're missing an asterisk.
/**
 * @assert (0) == NULL
 */
public static function factory($num) {
Can I have dependencies between ScalaTest specs, such that if a test fails, all tests dependent on it are skipped?
I didn't add that TestNG feature because at the time I didn't have any compelling use cases to justify it. I have since collected some use cases and am adding a feature to the next version of ScalaTest to address them. But it won't be dependent tests, just a way to "cancel" a test based on an unmet precondition.
In the meantime, what you can do is simply use Scala if statements to only register tests if the condition is met, or to register them as ignored if you prefer to see them in the output. If you are using Spec, it would look something like:
if (databaseIsAvailable) {
  it("should do something that requires the database") {
    // ...
  }
  it("should do something else that requires the database") {
  }
}
This will only work if the condition is guaranteed to be met at test construction time. If the database is supposed to be started up by a beforeAll method, for example, then you'd need to do the check inside each test, and in that case you could mark the test as pending. Something like:
it("should do something that requires the database") {
if (!databaseIsAvailable) pending
// ...
}
it("should do something else that requires the database") {
if (!databaseIsAvailable) pending
// ...
}
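For what it's worth, the precondition-based cancellation mentioned above did land in later ScalaTest releases as cancel() and assume(), where assume cancels the test rather than failing it when the condition is false. A minimal sketch, assuming a recent ScalaTest 3.x and an illustrative databaseIsAvailable helper:

import org.scalatest.funsuite.AnyFunSuite

class DatabaseDependentSuite extends AnyFunSuite {

  // Illustrative placeholder; a real check would ping the database.
  def databaseIsAvailable: Boolean = false

  test("something that requires the database") {
    assume(databaseIsAvailable, "database is not reachable") // cancels the test instead of failing it
    // ...
  }
}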
Here is a Scala trait that cancels all remaining tests in the suite once any test fails.
(Thanks for the suggestion to Jens Schauder, who posted another answer to this question.)
Pros: simple-to-understand test dependencies.
Cons: not very customizable.
I use it for my automated browser tests: if something fails, there's usually no point in continuing to interact with the GUI, since it's in a messed-up state.
License: public domain (Creative Commons CC0), or (at your option) the MIT license.
import org.scalatest.{Suite, SuiteMixin}
import org.scalatest.exceptions.TestPendingException

import scala.util.control.NonFatal

/**
 * If one test fails, this trait cancels all remaining tests.
 */
trait CancelAllOnFirstFailure extends SuiteMixin {
  self: Suite =>

  private var anyFailure = false

  abstract override def withFixture(test: NoArgTest) {
    if (anyFailure) {
      cancel
    }
    else try {
      super.withFixture(test)
    }
    catch {
      case ex: TestPendingException =>
        throw ex
      case NonFatal(t: Throwable) =>
        anyFailure = true
        throw t
    }
  }
}
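A hypothetical usage sketch (suite and test names are illustrative): mix the trait into a suite, and once one test fails, every later test is cancelled instead of run.

import org.scalatest.FunSuite

class CheckoutBrowserSuite extends FunSuite with CancelAllOnFirstFailure {
  test("log in")        { /* drive the browser ... */ }
  test("add to basket") { /* runs only if "log in" did not fail */ }
  test("pay")           { /* cancelled as soon as any earlier test fails */ }
}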
I don't know about a ready-made solution, but you can fairly easily write your own fixtures.
See "Composing stackable fixture traits" in the Scaladoc of the Suite trait.
Such a fixture could, for example, replace all test executions after the first failure with calls to pending.
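A minimal sketch of that idea, assuming ScalaTest 2.x, where withFixture returns an Outcome and lives in SuiteMixin (in ScalaTest 3.x the equivalent trait is TestSuiteMixin); the trait and field names are illustrative:

import org.scalatest.{Outcome, Pending, Suite, SuiteMixin}

/** After the first failed test, reports every remaining test as pending. */
trait PendingAfterFirstFailure extends SuiteMixin { self: Suite =>

  @volatile private var failed = false

  abstract override def withFixture(test: NoArgTest): Outcome = {
    if (failed) Pending                      // skip the body, report the test as pending
    else {
      val outcome = super.withFixture(test)
      if (outcome.isFailed) failed = true    // remember the first failure
      outcome
    }
  }
}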
You can use the trait org.scalatest.CancelAfterFailure to cancel the remaining tests after the first failure:
import org.scalatest._

class MySpec extends FunSuite with CancelAfterFailure {
  test("successful test") {
    succeed
  }
  test("failed test") {
    assert(1 == 0)
  }
  test("this test and all others will be cancelled") {
    // ...
  }
}