With JUnit 4 I am seeing that there are two ways of setting up and tearing down fixtures:
Decorate setUp/tearDown methods with @Before and @After respectively
Extend TestCase and override setUp/tearDown
I am trying to understand why there are two different approaches to doing the same thing. Is it that option 1 was introduced with Java 5 annotations, while TestCase is legacy from pre-Java-5 JUnit? Or perhaps I am confusing the two.
Any help will be appreciated.
The two approaches exist, as stated in this answer, because of what was introduced in JUnit 3 versus JUnit 4. That answer also explains why the annotation approach is better and what its advantages are.
If some of your test cases are written the JUnit 3 way and you now want to use JUnit 4 but are wondering how this can be done, please have a look at this. It explains how tests written the old way can be run with JUnit 4.
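For quick reference, here is a minimal sketch of the two styles side by side (Scala syntax; the counter is just a stand-in fixture):

import org.junit.{After, Before, Test}
import org.junit.Assert.assertEquals

// JUnit 4 style: any plain class, fixture methods marked with annotations
class CounterJUnit4Test {
  private var counter = 0

  @Before def setUp(): Unit = { counter = 0 }   // runs before each test
  @After def tearDown(): Unit = { /* release resources here */ }

  @Test def incrementing(): Unit = {
    counter += 1
    assertEquals(1, counter)
  }
}

// JUnit 3 style: extend TestCase, override setUp/tearDown,
// and name test methods test*
class CounterJUnit3Test extends junit.framework.TestCase {
  private var counter = 0

  override def setUp(): Unit = { counter = 0 }

  def testIncrementing(): Unit = {
    counter += 1
    assertEquals(1, counter)
  }
}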
Why does JUnit 5 (actually anything > v4.9) not support descriptions in the Assumption class methods? It was a very useful feature for fast debugging. What is the idea behind this removal?
What is the idea behind this removal?
This functionality has not been removed.
On the contrary, it never existed in JUnit 4.9. Rather, it was not introduced until JUnit 4.11, and it has remained in place ever since then.
If it appears that those methods have been removed, the only viable explanation is that you downgraded your JUnit 4.x version to something prior to JUnit 4.11.
Regarding JUnit 5: for each method in Assumptions, there are two variants that accept messages (what you call descriptions). The messages are always the last argument in JUnit Jupiter. For example, the assumeTrue() method has the following two variants that accept a String or a Supplier<String>.
org.junit.jupiter.api.Assumptions.assumeTrue(boolean, String)
org.junit.jupiter.api.Assumptions.assumeTrue(boolean, Supplier<String>)
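For example, a minimal sketch of both variants in use (Scala syntax, assuming Scala 2.12+ so the lambda converts to a Java Supplier; the CI check is just an illustration):

import org.junit.jupiter.api.Assumptions.assumeTrue
import org.junit.jupiter.api.Test

class AssumptionsExampleTest {
  @Test
  def onlyOnCi(): Unit = {
    // String variant: the message is built eagerly
    assumeTrue(System.getenv("CI") != null, "skipped: not running on CI")

    // Supplier[String] variant: the message is built lazily,
    // only if the assumption actually fails
    assumeTrue(System.getenv("CI") != null,
      () => s"skipped: CI was '${System.getenv("CI")}'")

    // the test body runs only if the assumptions above hold
  }
}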
I've used Lettuce for Python in the past. It is a simple BDD framework where specs are written in an external plain-text file. The implementation uses regexes to identify each step, providing reusable code for each sentence in the specification.
Using Scala, either with specs2 or ScalaTest, I'm forced to write the specification alongside the implementation, making it impossible to reuse the implementation in another test (sure, we could implement it in a function somewhere) and impossible to separate the test implementation from the specification itself (something I used to do, providing acceptance tests to clients for validation).
To conclude, my question: considering the importance of having clients validate tests, is there a way in BDD frameworks for Scala to load the tests from an external file, raising an exception if a sentence in the test is not implemented yet and executing the test normally if all sentences have been implemented?
I've just discovered a Cucumber plugin for sbt. Tests would be implemented under test/scala and specifications would be kept in test/resources as plain text files. I'm just not sure how reliable the library is or whether it will be supported in the future.
Edit:
The above is a wrapper for the following plugin, which solves the problem perfectly and supports Scala.
https://github.com/cucumber/cucumber-jvm
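As a sketch of what this looks like with the Scala bindings (the feature text and Account arithmetic are invented; the DSL shown is the one from the cucumber-scala module):

// src/test/resources/features/deposit.feature -- plain text, client-readable:
//
//   Feature: Account deposits
//     Scenario: Depositing money
//       Given an account with balance 10
//       When I deposit 5
//       Then the balance should be 15

import io.cucumber.scala.{EN, ScalaDsl}

class DepositSteps extends ScalaDsl with EN {
  private var balance = 0

  Given("""an account with balance {int}""") { (start: Int) =>
    balance = start
  }
  When("""I deposit {int}""") { (amount: Int) =>
    balance += amount
  }
  Then("""the balance should be {int}""") { (expected: Int) =>
    assert(balance == expected, s"expected $expected but got $balance")
  }
}

Note that if a sentence in the feature file has no matching step definition, Cucumber reports the scenario as undefined rather than passing it silently, which is the behavior asked for in the question.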
This is all about trade-offs. The Cucumber style of specification is great because it is pure text, easily editable and readable by non-coders.
However, such specifications are also pretty rigid because they impose a strict format based on features and Given-When-Then. In specs2, for example, we can write any text we want and annotate only the lines which are meant to be actions on the system or verifications. The drawback is that the text becomes annotated and that pending must be explicitly specified to indicate what hasn't been implemented yet. Also, an annotation is just a reference to some code living somewhere, so you can of course use the usual programming techniques to get reusability.
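To make that concrete, here is a minimal sketch of a specs2 acceptance specification in that style (specs2 2.x s2 syntax; the example content is invented):

import org.specs2.Specification

class HelloWorldSpec extends Specification { def is = s2"""

 This free-form text describes the system to any reader.

 The 'Hello world' string should
   contain 11 characters                          $e1
   start with 'Hello'                             $e2
   end with 'universe' (not implemented yet)      $notYet
                                                  """

  def e1 = "Hello world" must haveSize(11)
  def e2 = "Hello world" must startWith("Hello")
  def notYet = pending   // explicitly marks the unimplemented sentence
}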
BTW, the link above is an interesting example of a trade-off: in this file, the first spec is "uglier", but there are more compile-time checks that the When step uses the information from a Given step, or that we don't have a sequence of Then -> When steps. The second specification is nicer but also more error-prone.
Then there is the issue of maintaining the regular expressions. If there is a strict separation between the people writing the features and the people implementing them, then it's very easy to break the implementation even if nothing substantial changes.
Finally, there is the question of version control. Who owns the document? How can we be sure that the code is in sync with the spec? Who refactors the specification when required?
There is no perfect solution, by far. My own conclusion is that BDD artifacts should be in the hands of developers and verified by the other stakeholders, who read the code directly if it's readable or read an HTML/PDF output. And if the BDD artifacts are owned by developers, they might as well use their own tools to make their life easier with verification (using a compiler when possible) and maintenance (using automated refactorings).
You said yourself that it is easy to make the implementation reusable by the normal means Scala provides for this kind of stuff (methods, functions, traits, classes, types, ...), so there isn't really a problem there.
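For instance, a sketch of that kind of reuse (all names invented):

// step implementations factored into an ordinary trait...
case class Account(balance: Int)

trait AccountSteps {
  def accountWithBalance(start: Int): Account = Account(start)
  def deposit(account: Account, amount: Int): Account =
    account.copy(balance = account.balance + amount)
}

// ...and mixed into as many specifications as needed
class DepositSpec extends org.specs2.mutable.Specification with AccountSteps {
  "depositing 5 into an account holding 10" should {
    "yield a balance of 15" in {
      deposit(accountWithBalance(10), 5).balance must_== 15
    }
  }
}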
If you want to give a version without code to your customer, you can still give them the code files, and if they can't ignore a little syntax, you could probably write a custom reporter that writes all the text out to a file, maybe even formatted as HTML or something.
Another option would be to use JBehave or any other JVM-based framework; they should work with Scala without a problem.
Eric's main design criterion was the sustainability of executable specification development (through refactoring), not initial convenience due to the "beauty" of simple text.
see http://etorreborre.github.io/specs2/
The features of specs2 are:
Concurrent execution of examples by default
ScalaCheck properties
Mocks with Mockito
Data tables
AutoExamples, where the source code is extracted to describe the example
A rich library of matchers:
  easy to create and compose
  usable with must and should
  returning "functional" results or throwing exceptions
  reusable outside of specs2 (in JUnit tests, for example)
Forms for writing FitNesse-like specifications (with Markdown markup)
Html reporting to create documentation for acceptance tests, to create a User Guide
Snippets for documenting APIs with always up-to-date code
Integration with sbt and JUnit tools (Maven, IDEs, ...)
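To give a flavor of one of those features, here is a sketch of a data table inside a mutable specification (the numbers are arbitrary; the DataTables trait has to be mixed in):

import org.specs2.mutable.Specification
import org.specs2.matcher.DataTables

class AdditionSpec extends Specification with DataTables {
  "addition" should {
    "be correct for these sample rows" in {
      "a" | "b" | "a + b" |>
       1  !  2  !  3      |
       2  !  2  !  4      |
      { (a, b, sum) => a + b must_== sum }
    }
  }
}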
specs2 is quite impressive in both design and implementation.
If you look closely, you will see that the DSL can be extended while you keep type safety and strong command of the domain code under development.
Whoever sets aside the "it is uglier" argument and tries this seriously will find its power.
Check out the structured forms and snippets.
I've started using Akka with Scala to develop a set of interacting components in a bus-oriented architecture. I need to test the fault-tolerance of the system, and for that I was wondering if there is any way to use a probabilistic model of failure (i.e., set some failure parameters for each Actor) within a Scala test framework. Any ideas? Any framework out there that already implements this?
I assume you know things like TestKit and have read the documentation at http://akka.io/docs/akka/1.3/scala/testing.html#akka-testkit (see also http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html ).
You don't need Akka in the test setup, if I understood your problem right. Assume that Akka itself is tested and works OK; now you only have to test your code. Since you didn't show code, it's hard to give advice, but I will try:
You can test your method calls in different sequences and assert the results. I would hardcode the sequences, but you can also randomize that.
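If you want the probabilistic flavor from the question, one rough sketch is to inject failures at random and then assert that supervision recovers (the class names follow the newer akka.actor API rather than the 1.3 one linked above; the failure rate is an invented parameter):

import akka.actor.{Actor, ActorSystem, Props}
import scala.util.Random

// an echo actor that throws with probability failureRate,
// simulating a flaky component on the bus
class FlakyEcho(failureRate: Double) extends Actor {
  def receive = {
    case msg =>
      if (Random.nextDouble() < failureRate)
        throw new RuntimeException("injected random failure")
      sender() ! msg
  }
}

object FaultInjectionDemo extends App {
  val system = ActorSystem("fault-test")
  // default supervision restarts the actor after each injected failure
  val flaky = system.actorOf(Props(new FlakyEcho(failureRate = 0.1)), "flaky")
  (1 to 100).foreach(flaky ! _)   // fire messages; some will hit a failure
}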
Show some code and I will clarify what I mean. I could also be wrong, if I understood your question incorrectly.
There are three ways to organize unit tests: test per fixture, per class, or per feature. But the NUnit attribute for a test class is called TestFixture. Are there any historical reasons for that?
I respect Mike Two's response, but I would assert that the NUnit team got this very wrong, and that the use of [TestFixture] is a semantic wart on the face of NUnit. A test class is not a fixture. In what I've dug into regarding JUnit, I have not found any reference to a test class as a test fixture, nor much discussion about "test fixtures" referring to test classes. Rather, all the JUnit/xUnit discussion about fixtures pertains to setup and teardown, which are, of course, the common methods used to set up actual test fixtures.
Note that in NUnit 2.5, you can remove the [TestFixture] attribute.
Update: (July 2012)
I was just reading the Cucumber Book and on page 99, author Matt Wynne explains the origin of using "fixture." I quote:
There is a long tradition (coming from the hardware world, where test fixtures originated) of calling the link between the test system and the system under test a fixture. This is the "glue code" role that we've referred to in this book as automation code. The FIT testing framework uses this meaning of the term.
Some unit testing tools (such as NUnit) have further confused the issue by referring to the test case class itself as a fixture. So much for a ubiquitous language! (Wynne & Hellesoy, 2012)
The main historical reason is that NUnit started life as a straight port of JUnit, and JUnit called it a test fixture.
NUnit 1.0 was before my time, but I've been told it started out by renaming all of the .java files in JUnit to .cs files and trying to compile. It was fixed up from there, and a UI was added. When I joined for NUnit 2.0, there was still a method in NUnit 1.0 called IsVisualAgeForJava, since JUnit had special behavior for that at the time.
In NUnit 2.0 our aim was to make NUnit more .NET-ish, so we added the attributes and a bunch of other stuff. All of us came from Java backgrounds and had worked with JUnit for years. It seemed quite natural to use [TestFixture].
Now that you ask about it, I just looked it up.
A test fixture is the fixed baseline state that must be established before the tests are run, such that the results are predictable and repeatable. In unit testing frameworks, we use the SetUp and TearDown attributes/methods to create/destroy the test fixture (e.g. initialize instance variables with the right objects).
I am trying to run my test case using the JUnit 4.x runner, but it is treated like a 3.x one when I extend junit.framework.TestCase.
I had to do this, as our current test framework's base test class extends junit.framework.TestCase.
So what is your question?
Why does it work that way? Because having JUnit 3 style tests running in a JUnit 4 way could easily lead to confusion. Mixing and matching in the same test class is a bad idea; personally, I think that JUnit (or a third-party tool) should display a warning if you mix them in a class.
The end result is that you need to write the test class as either a JUnit 4 or a JUnit 3 style test class. If you are tied to JUnit 3 because of your own test classes, look at refactoring them. If you are tied to JUnit 3 because of a third-party tool, look at upgrading that tool.
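To make the failure mode concrete, here is a sketch of the trap and the fix (the test content is invented; Scala syntax):

import org.junit.Test
import org.junit.Assert.assertEquals

// Trap: because this extends junit.framework.TestCase, the runner uses the
// JUnit 3 protocol; the @Test annotation below is silently ignored, and the
// method is never run because its name does not start with "test"
class MixedStyleTest extends junit.framework.TestCase {
  @Test def ignoredByTheRunner(): Unit = assertEquals(2, 1 + 1)
}

// Fix: drop the superclass; the annotations alone drive the JUnit 4 runner
class PureJUnit4Test {
  @Test def runAsJUnit4(): Unit = assertEquals(2, 1 + 1)
}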