Scala: Acceptance Testing Order

For reference: "How make tests always run in same order in Scalatest?"
I plan to test my application by calling controllers/routes and comparing the responses to my expected ones.
I do not want to mock my persistence layer, so that I can test it too. My approach would be to execute the tests in an order that reflects user actions. Example:
Test 1: User registers
--> Test 2: (depends on an existing user) User creates profile
--> Test 3: (depends on a user with existing profile) User changes profile
So to save time, I do not want to mock anything for Test 2 and Test 3 but instead just work on the same database all the time and use the data generated by preceding tests.
Is this approach ok and how would one specify the execution order in Specs2 or ScalaTest?

Having no dependencies between individual test suites is preferred for at least two reasons:
Being concerned with the order in which the suites are executed makes the test execution harder to understand
If suite A depends on suite B, changing something in suite B may break suite A, meaning it is harder to find the cause of a failing test.
Because of these drawbacks I would recommend properly setting up your persistence layer at the beginning of each acceptance test, at the expense of execution time. Note that you can tag your tests and only execute your slow acceptance tests occasionally, so they do not slow down your development cycles.
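Tagging might look like the following sketch; the tag object, its name string, and the spec class are made up for illustration:

```scala
import org.scalatest.{FlatSpec, Tag}

// Hypothetical tag; the name string is arbitrary, but a fully qualified
// identifier makes it easy to select from the command line.
object SlowAcceptanceTest extends Tag("com.example.tags.SlowAcceptanceTest")

class RegistrationAcceptanceSpec extends FlatSpec {
  // taggedAs marks this test so runs can include or exclude it by tag
  "register" should "store a new user" taggedAs SlowAcceptanceTest in {
    // ...full acceptance test against the real persistence layer...
    assert(true)
  }
}
```

With sbt, `testOnly -- -l com.example.tags.SlowAcceptanceTest` would then exclude the slow tests from a run, while `-n com.example.tags.SlowAcceptanceTest` runs only them.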
If you want to implement dependent tests in ScalaTest nonetheless, you can create a nested test suite as is suggested in the question you linked:
Assuming your persistence layer:
object Users {
  var users: List[User] = Nil
  def apply(i: Int): User = users(i)
  def register(user: User): Unit = users = user :: users
  def isEmpty: Boolean = users.isEmpty
}

class User(var profile: Option[Profile] = None) {
  def createProfile(): Unit = profile = Some(new Profile)
}

class Profile(var content: String = "") {
  def update(newContent: String): Unit = content = newContent
}
and your individual tests:
@DoNotDiscover
class Test1 extends FlatSpec with ShouldMatchers {
  "register" should "store a new user" in {
    Users.register(new User)
    Users should not be 'empty
  }
}

@DoNotDiscover
class Test2 extends FlatSpec with ShouldMatchers {
  "createProfile" should "create a new user profile" in {
    val user = Users(0)
    user.createProfile()
    user.profile shouldBe 'defined
  }
}

@DoNotDiscover
class Test3 extends FlatSpec with ShouldMatchers {
  "update" should "update the content of the profile" in {
    val newContent = "Test"
    val profile = Users(0).profile.get
    profile.update(newContent)
    profile.content shouldBe newContent
  }
}
you can nest them in an acceptance test suite:
class AcceptanceTests extends Suites(
  new Test1,
  new Test2,
  new Test3
) with SequentialNestedSuiteExecution
The @DoNotDiscover annotation is necessary to prevent the test runner from executing the nested tests separately (as they are themselves test suites). Mixing in the trait SequentialNestedSuiteExecution guarantees that the nested suites are executed in the given order.


Run single test using Intellij linking to Suites class from scala test

Hi everybody, I have these suites:
@DoNotDiscover
class Suite1 extends FunSuite with Suite {
  test("Test1") {
    println("Running test1")
    println("Using data inserted by MasterDataSuite")
  }
}

@DoNotDiscover
class Suite2 extends FunSuite with Suite {
  test("Test2") {
    println("Running test2")
    println("Using data inserted by MasterDataSuite")
  }
}
class MasterDataSuite extends Suites(new Suite1, new Suite2) with BeforeAndAfterAllConfigMap {
  override def beforeAll(configMap: ConfigMap) {
    // inserting in DB
  }
  override def afterAll(configMap: ConfigMap) {
    // deleting in DB
  }
}
If I run from the command line, the data set up by MasterDataSuite is inserted; all good so far.
My big problem is that if I want to run a single test in Suite1 or Suite2, the test will always fail because it cannot find the data it needs to run successfully (this data should be inserted by the beforeAll method). I am not sure whether I need to set something up in IntelliJ, whether I need to set something up in code, or whether it is possible at all.
Quick notes:
Why use that instead of before and after per suite? A: Because there are hundreds of tests that already exist in the project, and it would take too long to check test by test to guarantee correct data per suite.
This works in the pipeline and from the command line.
Thanks in advance

When I run my test suites they fail with PSQLException: FATAL: sorry, too many clients already

I'm writing tests for my Play application and I want to run them with a real server so that I can fake all the answers from the external services.
In order to do that I extend PlaySpec and GuiceOneServerPerSuite and I override the method fakeApplication to create my routes and give them to the Guice Application
class MySpec extends PlaySpec with GuiceOneServerPerSuite {
  override def fakeApplication(): Application =
    GuiceApplicationBuilder().appRoutes(app => {
      case ("POST", "/url/") => app.injector.instanceOf(classOf[DefaultActionBuilder]) { Ok }
    }).globalApp(true).build()

  "Something" should {
    "work well" in {
      val wsClient = app.injector.instanceOf[WSClient]
      val service = new MyService(wsClient)
      service.method() mustBe ""
      app.injector.instanceOf[DBApi].databases().foreach(_.getConnection().close())
    }
  }
}
I have multiple test suites like this one and if I run them alone they work fine, but if I run them all together they fill up the connection pool and then everything fails with: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.
My considerations: I think it happens because a new Play Guice application is created for each test suite. I also tried to close the connections of all databases manually, but that did not solve the problem.
We had the same problem, so we separate these two use cases (running all suites vs. just one test suite).
This makes running all tests much faster, as the Play environment is only started once.
The Suite looks like:
class AcceptanceSpecSuite
  extends PlaySpec
  with GuiceOneAppPerSuite
  with BeforeAndAfter {

  // all specs
  override def nestedSuites: immutable.IndexedSeq[AcceptanceSpec] = Vector(
    // api
    new DatabaseTaskSpec,
    new HistoryPurgeTaskSpec,
    ...
  )

  override def fakeApplication(): Application =
    // your initialization
}
Now each Spec looks like:
@DoNotDiscover // important that it is run only if called explicitly
class DatabaseTaskSpec extends AcceptanceSpec {
...
In the parent class we can now switch between GuiceOneServerPerSuite and ConfiguredApp:
trait AcceptanceSpec
  extends PlaySpec
  // with GuiceOneServerPerSuite // if you want to run only one test suite
  with ConfiguredApp // if you want to run all suites
  with Logging
  with ScalaFutures
  with BeforeAndAfter {
  ...
I know it's a bit of a hack - so I am also interested in a more elegant solution;).
You can make your DB instance a singleton; that way multiple instances won't be created, so the connection pool won't fill up.
Something like this:
@Singleton
object TestDBProperties extends DBProperties {
  override val db: Database = Database.forURL(
    url = "jdbc:h2:mem:testdb;MODE=MYSQL;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=FALSE;",
    driver = "org.h2.Driver")
}
Hope this helps.

Specs2 setting up environment before and after the whole suite

I'm writing some specs2 integration tests for my spray.io project that uses DynamoDB. I'm using sbt-dynamodb to load a local DynamoDB into the environment. I use the following pattern to load my tables before the tests are run.
trait DynamoDBSpec extends SpecificationLike {
  val config = ConfigFactory.load()
  val client = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())

  lazy val db = {
    client.setEndpoint(config.getString("campaigns.db.endpoint"))
    new DynamoDB(client)
  }

  override def map(fs: => Fragments): Fragments =
    Step(beforeAll) ^ fs ^ Step(afterAll)

  protected def beforeAll() = {
    // load my tables
  }

  protected def afterAll() = {
    // delete my tables
  }
}
Then any test class can just extend DynamoDBSpec and the tables will be created. It all works fine until I extend DynamoDBSpec from more than one test class, at which point it throws a ResourceInUseException: 'Cannot create preexisting table'. The reason is that the specs execute in parallel, so they try to create the tables at the same time.
I tried to overcome this by running the tests in sequential mode, but beforeAll and afterAll are still executed in parallel.
Ideally I think it would be good to create the tables before the entire suite runs instead of each Spec class invocation, and then tear them down after the entire suite completes. Does anyone know how to accomplish that?
There are two ways to achieve this.
With an object
You can use an object to synchronize the creation of your database
object Database {
  lazy val config = ConfigFactory.load()
  lazy val client =
    new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())

  // this will only be done once in
  // the same jvm
  lazy val db = {
    client.setEndpoint(config.getString("campaigns.db.endpoint"))
    val database = new DynamoDB(client)
    // drop previous tables if any
    // and create new tables
    database.create...
    database
  }
}
// BeforeAll is a new trait in specs2 3.x
trait DynamoDBSpec extends SpecificationLike with BeforeAll {
  // load my tables
  def beforeAll = Database.db
}
As you can see, in this model we don't remove tables when the specification is finished (because we don't know whether all other specifications have been executed); we just remove them when we re-run the specifications. This can actually be a good thing, because it will help you investigate failures if any.
The other way to synchronize specifications at a global level, and to properly clean up at the end, is to use specification links.
With links
With specs2 3.3 you can create dependencies between specifications with links. This means that you can define a "Suite" specification which is going to:
start the database
collect all the relevant specifications
delete the database
For example
import org.specs2._
import specification._
import core.Fragments
import runner.SpecificationsFinder

// run this specification with `all` to execute
// all linked specifications
class Database extends Specification { def is =
  "All database specifications".title ^ br ^
  link(new Create).hide ^
  Fragments.foreach(specs)(s => link(s) ^ br) ^
  link(new Delete).hide

  def specs = specifications(pattern = ".*Db.*")
}
// start the database with this specification
class Create extends Specification { def is = xonly ^
  step("create database".pp)
}

// stop the database with this specification
class Delete extends Specification { def is = xonly ^
  step("delete database".pp)
}

// an example of a specification using the database
// it will be invoked by the "Database" spec because
// its name matches ".*Db.*"
class Db1Spec extends Specification { def is = s2"""
  test $db
"""
  def db = { println("use the database - 1"); ok }
}

class Db2Spec extends Specification { def is = s2"""
  test $db
"""
  def db = { println("use the database - 2"); ok }
}
When you run:
sbt> test-only *Database* -- all
You should see a trace like
create database
use the database - 1
use the database - 2
delete database

Can you dynamically generate Test names for ScalaTest from input data?

I have a number of test data sets that run through the same ScalaTest unit tests. I'd love it if each test data set were its own set of named tests, so that if one data set fails a test, I know exactly which one it was rather than going to a single test and looking at which file it failed on. I just can't seem to find a way for the test name to be generated at runtime. I've looked at property- and table-based testing and currently use should behave like to share fixtures, but none of these seem to do what I want.
Have I not uncovered the right testing approach in ScalaTest or is this not possible?
You can write dynamic test cases with ScalaTest like Jonathan Chow wrote in his blog here: http://blog.echo.sh/2013/05/12/dynamically-creating-tests-with-scalatest.html
However, I always prefer the WordSpec testing style, and this also works with dynamic test cases just as Jonathan mentions.
class MyTest extends WordSpec with Matchers {
  "My test" should {
    Seq(1, 2, 3) foreach { count =>
      s"run test $count" in {
        count should be(count)
      }
    }
  }
}
Running this test produces 3 test cases:
TestResults
MyTest
My test
run test 1
run test 2
run test 3
P.S. You can even create multiple test cases in the same foreach function using the same count variable.
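The P.S. above could look like this sketch; the second test case per element is invented for illustration:

```scala
import org.scalatest.{Matchers, WordSpec}

class MultiCaseTest extends WordSpec with Matchers {
  "My test" should {
    Seq(1, 2, 3) foreach { count =>
      // two named test cases are registered per element,
      // both closing over the same count variable
      s"run test $count" in {
        count should be(count)
      }
      s"see that $count is positive" in {
        count should be > 0
      }
    }
  }
}
```

This registers six test cases in total, two per element.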
You could write a base test class, and extend it for each data set. Something like this:
case class Person(name: String, age: Int)

abstract class MyTestBase extends WordSpec with Matchers {
  def name: String
  def dataSet: List[Person]

  s"Data set $name" should {
    "have no zero-length names" in {
      dataSet.foreach { s => s.name should not be empty }
    }
  }
}

class TheTest extends MyTestBase {
  override lazy val name = "Family" // note lazy, otherwise initialization fails
  override val dataSet = List(Person("Mom", 53), Person("Dad", 50))
}
Which produces output like this:
TheTests:
Data set Family
- should have no zero-length names
You can use Scala string interpolation in your test names. Using behavior functions, something like this would work:
case class Person(name: String, age: Int)

trait PersonBehaviors { this: FlatSpec =>
  // or add data set name as a parameter to this function
  def personBehavior(person: => Person): Unit = {
    behavior of person.name
    it should s"have non-negative age: ${person.age}" in {
      assert(person.age >= 0)
    }
  }
}

class TheTest extends FlatSpec with PersonBehaviors {
  val person = Person("John", 32)
  personBehavior(person)
}
This produces output like this:
TheTest:
John
- should have non-negative age: 32
What about using ScalaTest's clue mechanism so that any test failures can report as a clue which data set was being used?
You can use the withClue construct provided by Assertions, which is extended by every style trait in ScalaTest, to add extra information to reports of failed or canceled tests.
See also the documentation on AppendedClues
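Applied to the data-set scenario, a withClue sketch might look like this; the data sets and the spec name are made up:

```scala
import org.scalatest.{FlatSpec, Matchers}

class DataSetCluesSpec extends FlatSpec with Matchers {
  // hypothetical data sets keyed by name
  val dataSets = Map(
    "Family"  -> List("Mom", "Dad"),
    "Friends" -> List("Alice", "Bob")
  )

  "every data set" should "have no zero-length names" in {
    dataSets.foreach { case (name, people) =>
      // on failure, the clue prefixes the message with the data set name
      withClue(s"data set '$name':") {
        people.foreach(_ should not be empty)
      }
    }
  }
}
```

A single shared test then still reports which data set caused any failure.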

How to get the context of the running tests in scala-test ? Is there anything like ITestContext as in TestNG to get those information?

I want to know the list of running test cases and work with that information. In TestNG, implementing the onFinish, onStart, etc. methods of ITestListener gives you an ITestContext to retrieve this information. Is there anything similar in ScalaTest? Suggestions are highly appreciated. Thanks in advance.
Sky's answer is actually looking in the right direction. Mixing in ScalaTest's BeforeAndAfterAll trait gives you access to some contextual information about the suite, such as:
The suite name
The number of tests in the suite
The names of tests in the suite
Tag information for the suite
The information you get is perhaps not as rich as the contextual information you get from TestNG (for example, this trait won't be able to tell you which tests passed or failed in afterAll), but maybe the information it gives you is good enough for your purposes:
class MyTest extends FunSuite with BeforeAndAfterAll {
  override def beforeAll() {
    // suiteName will give you the name of the suite
    info(suiteName)
    // testNames will give you the names of your tests
    testNames.foreach(info(_))
    // tags will give you a mapping of test names to tags
    tags.keys.foreach(t =>
      info(t + " tagged with tags " + tags(t).mkString(",")))
  }
  ...
}
Yes,
ScalaTest has the BeforeAndAfter trait, which has:
before {
  // write code here (runs before each test case in a test file)
}
after {
  // write code here (runs after each test case in a test file)
}
and another trait, BeforeAndAfterAll, which has:
override def beforeAll(): Unit = {
  // write code here (runs before all test cases in a test file)
}
override def afterAll(): Unit = {
  // write code here (runs after all test cases in a test file)
}