Can I create a wrapper around NUnit, MbUnit, xUnit or other testing framework? - nunit

How can I create a wrapper around a testing framework? We still don't know which testing framework we are going to use, but I need to start writing unit tests now. With this question I want to know how I can later switch from NUnit to MbUnit, xUnit or even MSTest.

You could create a wrapper, but I think you can utilise your time much better. I'd say pick the simplest one that fits your needs (my personal favourite would be the war-horse NUnit); the newer frameworks mostly add functionality that helps you write more complex test fixtures.
However, I value simplicity over "flexibility". If you later find yourself wanting that "cool feature X in testing framework Y", you can write that particular test fixture using Y (you could also migrate the entire test suite to Y for consistency, but time is always scarce). Switching between two unit testing frameworks is usually monotonous work, mostly renaming attributes, although some migrations may be more involved (disclaimer: I have no flying time with MbUnit).
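As a rough sketch of how mechanical that rename usually is, here is the same made-up fixture written first for NUnit and then for xUnit.net (the Cart class is hypothetical, and the two snippets are before/after versions of the same file, not one program):

// NUnit version
using NUnit.Framework;

[TestFixture]
public class CartTests
{
    private Cart _cart;

    [SetUp]
    public void CreateCart() { _cart = new Cart(); }

    [Test]
    public void Adding_an_item_increases_the_count()
    {
        _cart.Add("book");
        Assert.AreEqual(1, _cart.Count);
    }
}

// xUnit.net version: [TestFixture] disappears, [SetUp] becomes the constructor,
// [Test] becomes [Fact] and Assert.AreEqual becomes Assert.Equal.
using Xunit;

public class CartTests
{
    private readonly Cart _cart;

    public CartTests() { _cart = new Cart(); }

    [Fact]
    public void Adding_an_item_increases_the_count()
    {
        _cart.Add("book");
        Assert.Equal(1, _cart.Count);
    }
}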
Your comment, however, worries me a bit.
Why is the customer deciding which testing framework you use for development? That should be the development team's choice. The customer wouldn't want you to define the product requirements, would they? The quality of the tests doesn't depend on the framework used, so I don't see how this affects the customer.

You could use an existing wrapper that allows you to run multiple unit test frameworks, so even if you switch frameworks you can still use the old unit tests. For the unit test frameworks you listed, I would recommend taking a look at Gallio.
From http://www.gallio.org/...
At present Gallio can run tests from MbUnit versions 2 and 3, MSTest, NBehave, NUnit, xUnit.Net, csUnit, and RSpec. Gallio provides tool support and integration with AutoCAD, CCNet, MSBuild, NAnt, NCover, Pex, Powershell, Resharper, TestDriven.Net, TypeMock, and Visual Studio Team System.

Related

Best way to test JPA?

I am working on a JPA project and I want to have unit tests (although, since a database is required, these will really be more like integration tests).
What is the best way to test a JPA project? Can JUnit do that? Is there a better way?
Thank you very much
You have given limited information on the tools/frameworks you are using, and the question is very general, but I will give a quick answer on the points you raise. These are just pointers, however, as I believe you need to do a good bit more leg-work to figure out what is best for your particular project.
JUnit allows you to call your class methods with specific parameters and to examine the return values. The returned values may be an entity that should have certain fields at certain values, a list of entities with expected field values, exceptions, etc. (whatever your methods return). You can run your tests as you introduce new functionality and re-run them to check for regressions as development proceeds. You can easily test edge cases and non-nominal behaviour. Getting JUnit up and running in Java SE/EE is quite straightforward, so that could be a good option for getting stuck in with testing. It is one of the quicker ways I use to test new functionality.
Spring/MVC – Using an MVC framework can certainly be useful. I have used JSF/PrimeFaces, but that is principally because the application was to be a JSF application, and such development tests gave confidence that the ‘Model’ layer provided what was needed to the rest of the framework. This gives some confidence in the model/JPA/DB layers (it is certainly nice to see the data that is delivered) but does not provide the flexible, nimble and targeted testing you might expect from JUnit.
I think DbUnit might be something to look at once you’ve made some progress with JUnit.
See http://dbunit.sourceforge.net/
DbUnit is a JUnit extension (also usable with Ant) targeted at database-driven projects that, among other things, puts your database into a known state between test runs. This is an excellent way to avoid the myriad of problems that can occur when one test case corrupts the database and causes subsequent tests to fail or exacerbate the damage.

MVC 5 Unit tests vs integration tests

I'm currently working on an MVC 5 project using Entity Framework 5 (I may switch to 6 soon). I use database-first and MySQL with an existing database (of about 40 tables). This project started as a “proof of concept”, and now my company has decided to go with the software I'm developing. I am struggling with the testing part.
My first idea was to use mostly integration tests. That way I felt I could test my code and also my underlying database. I created a script that dumps the existing database schema into a “test database” in MySQL. I always start my tests with a clean database with no data, and create/delete a bit of data for each test. The thing is that running my tests takes a fair amount of time (and I run my tests very often).
I am thinking of replacing my integration tests with unit tests in order to speed up the time it takes to run them. I would “remove” the test database and only use mocks instead. I have tested a few methods and it seems to work great, but I'm wondering:
Do you think mocking my database can “hide” bugs that only occur when my code runs against a real database? Note that I don’t want to test Entity Framework (I'm sure the fine people at Microsoft did a great job on that), but can my code run well against mocks and break against MySQL?
Do you think going from integration testing to unit testing is a kind of “downgrade”?
Do you think dropping integration testing and adopting unit testing for speed considerations is OK?
I'm aware that some frameworks exist that run the tests against an in-memory database (e.g. the Effort framework), but I don’t see the advantage of this over mocking; what am I missing?
I'm aware that this kind of question is prone to “it depends on your needs” kinds of responses, but I'm sure some of you have been through this and can share your knowledge. I'm also aware that in a perfect world I would do both (tests using mocks and tests using the database), but I don’t have that kind of time.
As a side question, what tool would you recommend for mocking? I was told that Moq is a good framework but that it's a little bit slow. What do you think?
Do you think mocking my database can “hide” bugs that only occur when my code runs against a real database? Note that I don’t want to test Entity Framework (I’m sure the fine people at Microsoft did a great job on that), but can my code run well against mocks and break against MySQL?
Yes, if you only test your code using Mocks, it's very easy for you to have false confidence in your code. When you're mocking the database, what you're doing is saying "I expect these calls to take place". If your code makes those calls, it'll pass the test, but if they're the wrong calls, it won't work in production. At a simple level, if you add / remove a column from your database the database interaction may need to change, but the process of adding/removing the column is hidden from your tests until you update the mocks.
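As a rough illustration of that "I expect these calls to take place" point, here is a minimal sketch using Moq and NUnit with a hypothetical Order, IOrderRepository and OrderService (none of these types come from the question):

using Moq;
using NUnit.Framework;

public class Order { public int Id; public decimal Total; }

public interface IOrderRepository
{
    Order FindById(int id);
}

public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) { _repository = repository; }
    public decimal GetTotal(int id) { return _repository.FindById(id).Total; }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void Returns_the_order_total()
    {
        // The mock returns exactly what we set up, so this test keeps passing
        // even if the real Orders table changes shape or the mapping breaks.
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.FindById(42))
                  .Returns(new Order { Id = 42, Total = 99.95m });

        var service = new OrderService(repository.Object);

        Assert.AreEqual(99.95m, service.GetTotal(42));
    }
}

An integration test of the same method against the MySQL test database would catch the kind of schema mismatch that this unit test cannot.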
Do you think going from integration testing to unit testing is a kind of “downgrade”?
It's not a downgrade, it's different. Unit testing and integration testing have different benefits that in most cases will complement each other.
Do you think dropping integration testing and adopting unit testing for speed considerations is OK?
"OK" is very subjective. I'd say no; however, you don't have to run all of your tests all of the time. Most testing frameworks (if not all) allow you to categorise your tests in some way. This allows you to create subsets of your tests, so you could, for example, have a "DatabaseIntegration" category that you put all of your database integration tests in, or an "EndToEnd" category for full end-to-end tests. My preferred approach is to have separate builds. The usual/continuous build that I run before/after each check-in only runs the unit tests; this gives quick feedback and validation that nothing has broken. A less frequent / daily / overnight build, in addition to running the unit tests, also runs the slower / repeatable integration tests. I would also tend to run the integration tests for areas I've been working on before checking in the code, if there's a possibility of the code impacting the integration.
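As a rough sketch of the categorisation idea (assuming NUnit and its 2.x console runner; the class, method and category names are made up):

using NUnit.Framework;

[TestFixture]
public class CustomerTests
{
    [Test]
    public void Maps_a_customer_row_to_an_entity()
    {
        // Fast, isolated unit test: runs in the continuous build.
    }

    [Test, Category("DatabaseIntegration")]
    public void Reads_a_customer_back_from_the_real_database()
    {
        // Slow test against MySQL: excluded from the continuous build, e.g.
        //   nunit-console MyProject.Tests.dll /exclude:DatabaseIntegration
        // and included in the nightly build with /include:DatabaseIntegration.
    }
}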
I'm aware that some frameworks exist that run the tests against an in-memory database (e.g. the Effort framework), but I don’t see the advantage of this over mocking; what am I missing?
I haven't used them, so this is speculation. I would imagine the main benefit is that rather than having to simulate the database interaction with mocks, you instead set up the database and measure the resulting state. The tests become less about how you did something and more about what data moved. On the face of it, this could lead to less brittle tests; however, you're effectively writing integration tests against another data provider that you're not going to use in production. Whether it's the right thing to do is, again, very subjective.
I guess the second benefit is likely to be that you don't necessarily need to refactor your code in order to take advantage of the in-memory database. If your code hasn't been constructed to support dependency injection, then there is a good chance that you will need to perform some level of refactoring in order to support mocking.
I'm also aware that in a perfect world I would do both (tests using mocks and tests using the database), but I don’t have that kind of time.
I don't really understand why you feel this is the case. You've already said that you have integration tests that you're planning to replace with unit tests. Unless you need to do major refactoring in order to support the unit tests, your integration tests should still work. You don't usually need as many integration tests as unit tests, since the unit tests are there to verify the functionality and the integration tests are there to verify the integration, so the overhead of keeping them should be relatively small. Using categorisation to determine which tests you run will reduce the time impact of running your tests.
As a side question, what tool would you recommend for mocking? I was told that Moq is a good framework but that it's a little bit slow. What do you think?
I've used quite a few different mocking libraries and, for the most part, they are all very similar. Some things are easier with different frameworks, but without knowing what you're doing it's hard to say whether you will notice. If you haven't built your code with dependency injection in mind, then you may find it challenging to get your mocks to where you need them.
Mocking of any kind is generally quite fast; you're usually (unless you're using partial mocks) removing all of the functionality of the class/interface you're mocking, so it's going to perform faster than your normal code. The only performance issues I've heard about are with MS Fakes/shims: sometimes (depending on the complexity of the assembly being faked) it can take a while for the fake assemblies to be created.
The two frameworks I've used that are a bit different are MS Fakes/shims and Typemock. The MS version requires a certain edition of Visual Studio, but allows you to generate fake assemblies with shims for certain types of object, which means you don't have to pass your mocks from your test through to where they're used. Typemock is a commercial solution that uses the profiling API to inject code while your tests are running, which means it can reach parts other mocking frameworks can't. Both are particularly useful if you've got a codebase that hasn't been written with unit testing in mind, as they can help to bridge the gap.

Matlab moving from XUnit to Matlab 2013 unit testing

As many of you are aware, as of the release of MATLAB 2013a, further development of xUnit, a popular unit testing framework for MATLAB, has been discontinued.
Is MATLAB's new, native unit testing framework comparable to xUnit? What features is it lacking when compared to xUnit? Is it better or worse than xUnit?
MATLAB xUnit has been an excellent contribution to the test focused development efforts of those writing MATLAB code. It has a solid implementation, it follows the xUnit paradigm very well, and has been invaluable as a file exchange contribution.
The MATLAB Unit Test framework has indeed learned from this submission, as well as from decades of requirements and test-focused development on the MathWorks' internal code base. We have also learned from and extended upon frameworks in other languages, such as JUnit, NUnit, and Python's unittest framework. As such, there certainly are many more features in the R2013a-and-beyond framework, and it is designed to scale and extend.
There are too many other features to go into in a simple answer, but perhaps one way to describe some of the differences is that the 13a framework is what I loosely call an "xUnit 2.0" framework, while the File Exchange submission is an "xUnit 1.0" framework. If you are familiar with JUnit, this is like the difference between JUnit 3 and JUnit 4.
There are also other intangible or as yet unrealized benefits, such as:
The framework is included directly in MATLAB so you can share tests with others and know that they can run the tests even if they are not familiar with testing and do not want to download the file exchange framework.
The framework is under active development with a pipeline of additional features and capabilities in the works for future releases.
Hope that helps. I would be happy to go over any questions you have about specific functionality or features.
I don't believe MathWorks are planning to stop making xUnit available at all, so you can continue using it if you like. xUnit had not seen any large changes for quite a while in any case, and even though it won't be developed further in terms of features, it may receive an occasional fix if any are needed.
I have tried out the new framework quite a bit, but have not used it on any large projects yet. Previously I have used xUnit on large projects. However, I'm no expert on unit testing - so please read the following opinions in that context.
I'm pretty sure there's nothing you can do in xUnit that you can't do in the new framework. In general it's much more flexible and powerful than xUnit, providing additional features and a better way to organise and structure your tests. It's a lot easier to set up and tear down suites of tests, to manage and close resources (files, figure windows, database connections etc), and to carry out tricky tests such as checking that the right number of arguments are returned.
However, whereas a typical xUnit test was implemented as a fairly simple MATLAB function, tests in the new framework are typically implemented (in 13a, but see below for 13b) as classes using MATLAB's OO syntax, and if you're not comfortable with that it may seem like a big leap.
I should also add that although the documentation for the testing framework is excellent as reference material, I haven't found it to be great as a tutorial.
In 13b, the need to use classes has been offset a bit with the introduction of the functiontests command, which creates a test suite for you from a file containing tests implemented as local functions. That will make things much easier if you're not comfortable with class syntax. But I would think that if you want to take advantage of everything, you'd probably still want to use the main framework.
Hope my experience is of help - if you're lucky, perhaps @AndyCampbell will chime in...

NUnit/TestDriven (product, not methodology), is it possible to ignore a category of tests by default?

We're in the process of defining automated performance tests using NUnit. However, since many of these will run for a bit of time, to cater for inaccuracies in timing and load on the system, we don't want the developers to have to run these under normal development.
Is there any way we can instruct the TestDriven Visual Studio addin to ignore a set of unit tests, so that if the programmer just right-clicks on the unit-test project and selects "Run Tests", those tests are not executed?
It seems the only way to make tests be ignored by default is through the [Explicit] attribute, but that means I incur a maintenance overhead on our test server, since there doesn't seem to be a way to execute all the explicit tests in one fell swoop without naming them all.
Or should I just separate out all the performance tests into their own project and instruct the programmers to leave it alone (at least until they need to update the tests)?
I would create a separate testing assembly. Then you can have your normal unit tests in one assembly and performance tests in a second assembly.
That way, at development time the developers never run the perf tests. At build time you could execute both test assemblies to make sure that both Unit & Perf tests are run.
I would do this not just so that some tests don't have to be run, but because it offers a clearer distinction of what's in each assembly. The new assembly isn't a set of "unit" tests. And as you bring new devs on, there won't be any confusion about how you write tests. You wouldn't want a new guy looking at a perf test and thinking that's how to write unit tests.
As for the concrete question: to ask TestDriven.Net to avoid running your performance tests, check the settings dialog under "Tools -> Options".
What about the [Ignore] attribute? Maybe I am misunderstanding, but it sounds to me like that will do the trick for you?
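For reference, here is a minimal sketch (assuming NUnit 2.x syntax and made-up test names) of how the attributes discussed in this thread behave: [Ignore] skips a test everywhere and reports it as ignored, while [Explicit] skips it unless it is explicitly selected, for example by name or via an included category on the build server.

using NUnit.Framework;

[TestFixture]
public class PerformanceTests
{
    // Always skipped and reported as ignored, wherever the tests run.
    [Test, Ignore("Too slow for the regular build")]
    public void Large_import_completes_in_under_a_minute() { /* ... */ }

    // Skipped unless explicitly selected, e.g. by running it by name
    // or by including the "Performance" category on the build server.
    [Test, Explicit, Category("Performance")]
    public void Search_responds_quickly_under_load() { /* ... */ }
}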

iOS Tests/Specs TDD/BDD and Integration & Acceptance Testing

What are the best technologies to use for behavior-driven development on the iPhone? And what are some open source example projects that demonstrate sound use of these technologies? Here are some options I've found:
Unit Testing
Test::Unit Style
OCUnit/SenTestingKit as explained in iOS Development Guide: Unit Testing Applications & other OCUnit references.
Examples: iPhoneUnitTests, Three20
CATCH
GHUnit
Google Toolbox for Mac: iPhone Unit Testing
RSpec Style
Kiwi (which also comes with mocking & expectations)
Cedar
Jasmine with UI Automation as shown in dexterous' iOS-Acceptance-Testing specs
Acceptance Testing
Selenium Style
UI Automation (works on device)
UI Automation Instruments Guide
UI Automation reference documentation
Tuneup js - a cool library to use with UIAutomation.
Capturing User Interface Actions into Automation Scripts
It's possible to use Cucumber (written in JavaScript) to drive UI Automation. This would be a great open-source project. Then, we could write Gherkin to run UI Automation testing. For now, I'll just write Gherkin as comments.
UPDATE: Zucchini Framework seems to blend Cucumber & UI Automation! :)
Old Blog Posts:
Alex Vollmer's UI Automation tutorial
O'Reilly Answers UI Automation tutorial
Adi Saxena's UI Automation tutorial
UISpec with UISpecRunner
UISpec is open source on Google Code.
UISpec has comprehensive documentation.
FoneMonkey
Cucumber Style
Frank and iCuke (based on the Cucumber meets iPhone talk)
The Frank Google Group has much more activity than the iCuke Google Group.
Frank runs on both device and simulator, while iCuke only runs in simulator.
Frank seems to have a more comprehensive set of step definitions than iCuke's step definitions. And, Frank also has a step definition compendium on their wiki.
I proposed that we merge iCuke & Frank (similar to how Merb & Rails merged) since they have the same common goal: Cucumber for iOS.
KIF (Keep It Functional) by Square
Zucchini Framework uses Cucumber syntax for writing tests and uses CoffeeScript for step definitions.
Additions
OCMock for mocking
OCHamcrest and/or Expecta for expectations
Conclusion
Well, obviously, there's no right answer to this question, but here's what I'm choosing to go with currently:
For unit testing, I used to use OCUnit/SenTestingKit in Xcode 4. It's simple & solid. But, I prefer the language of BDD over TDD (Why is RSpec better than Test::Unit?) because our words create our world. So now, I use Kiwi with ARC & Kiwi code completion/autocompletion. I prefer Kiwi over Cedar because it's built on top of OCUnit and comes with RSpec-style matchers & mocks/stubs. UPDATE: I'm now looking into OCMock because, currently, Kiwi doesn't support stubbing toll-free bridged objects.
For acceptance testing, I use UI Automation because it's awesome. It lets you record each test case, making writing tests automatic. Also, Apple develops it, and so it has a promising future. It also works on the device and from Instruments, which allows for other cool features, like showing memory leaks. Unfortunately, with UI Automation, I don't know how to run Objective-C code, but with Frank & iCuke you can. So, I'll just test the lower-level Objective-C stuff with unit tests, or create UIButtons only for the TEST build configuration, which when clicked, will run Objective-C code.
Which solutions do you use?
Related Questions
Is there a BDD solution that presently works well with iOS4 and Xcode4?
SenTestingKit (integrated with XCode) versus GHUnit on XCode 4 for Unit Testing?
Testing asynchronous code on iOS with OCunit
SenTestingKit in Xcode 4: Asynchronous testing?
How does unit testing on the iPhone work?
tl;dr
At Pivotal we wrote Cedar because we use and love RSpec on our Ruby projects. Cedar isn't meant to replace or compete with OCUnit; it's meant to bring the possibility of BDD-style testing to Objective-C, just as RSpec pioneered BDD-style testing in Ruby but hasn't eliminated Test::Unit. Choosing one or the other is largely a matter of style preferences.
In some cases we designed Cedar to overcome some shortcomings in the way OCUnit works for us. Specifically, we wanted to be able to use the debugger in tests, to run tests from the command line and in CI builds, and get useful text output of test results. These things may be more or less useful to you.
Long answer
Deciding between two testing frameworks like Cedar and OCUnit (for example) comes down to two things: preferred style, and ease of use. I'll start with the style, because that's simply a matter of opinion and preference; ease of use tends to be a set of tradeoffs.
Style considerations transcend whatever technology or language you use. xUnit-style unit testing has been around for far longer than BDD-style testing, but the latter has rapidly gained in popularity, largely due to RSpec.
The primary advantage of xUnit-style testing is its simplicity, and wide adoption (amongst developers who write unit tests); nearly any language you could consider writing code in has an xUnit-style framework available.
BDD-style frameworks tend to have two main differences when compared to xUnit-style: how you structure the test (or specs), and the syntax for writing your assertions. For me, the structural difference is the main differentiator. xUnit tests are one-dimensional, with one setUp method for all tests in a given test class. The classes that we test, however, aren't one-dimensional; we often need to test actions in several different, potentially conflicting, contexts. For example, consider a simple ShoppingCart class, with an addItem: method (for the purposes of this answer I'll use Objective C syntax). The behavior of this method may differ when the cart is empty compared to when the cart contains other items; it may differ if the user has entered a discount code; it may differ if the specified item can't be shipped by the selected shipping method; etc. As these possible conditions intersect with one another you end up with a geometrically increasing number of possible contexts; in xUnit-style testing this often leads to a lot of methods with names like testAddItemWhenCartIsEmptyAndNoDiscountCodeAndShippingMethodApplies. The structure of BDD-style frameworks allows you to organize these conditions individually, which I find makes it easier to make sure I cover all cases, as well as easier to find, change, or add individual conditions. As an example, using Cedar syntax, the method above would look like this:
describe(@"ShoppingCart", ^{
    describe(@"addItem:", ^{
        describe(@"when the cart is empty", ^{
            describe(@"with no discount code", ^{
                describe(@"when the shipping method applies to the item", ^{
                    it(@"should add the item to the cart", ^{
                        ...
                    });
                    it(@"should add the full price of the item to the overall price", ^{
                        ...
                    });
                });
                describe(@"when the shipping method does not apply to the item", ^{
                    ...
                });
            });
            describe(@"with a discount code", ^{
                ...
            });
        });
        describe(@"when the cart contains other items", ^{
            ...
        });
    });
});
In some cases you'll find contexts that contain the same sets of assertions, which you can DRY up using shared example contexts.
The second main difference between BDD-style frameworks and xUnit-style frameworks, assertion (or "matcher") syntax, simply makes the style of the specs somewhat nicer; some people really like it, others don't.
That leads to the question of ease of use. In this case, each framework has its pros and cons:
OCUnit has been around much longer than Cedar, and is integrated directly into Xcode. This means it's simple to make a new test target, and, most of the time, getting tests up and running "just works." On the other hand, we found that in some cases, such as running on an iOS device, getting OCUnit tests to work was nigh impossible. Setting up Cedar specs takes some more work than OCUnit tests, since you have to get the library and link against it yourself (never a trivial task in Xcode). We're working on making setup easier, and any suggestions are more than welcome.
OCUnit runs tests as part of the build. This means you don't need to run an executable to make your tests run; if any tests fail, your build fails. This makes the process of running tests one step simpler, and test output goes directly into your build output window which makes it easy to see. We chose to have Cedar specs build into an executable which you run separately for a few reasons:
We wanted to be able to use the debugger. You run Cedar specs just like you would run any other executable, so you can use the debugger in the same way.
We wanted easy console logging in tests. You can use NSLog() in OCUnit tests, but the output goes into the build window where you have to unfold the build step in order to read it.
We wanted easy to read test reporting, both on the command line and in Xcode. OCUnit results appear nicely in the build window in Xcode, but building from the command line (or as part of a CI process) results in test output intermingled with lots and lots of other build output. With separate build and run phases Cedar separates the output so the test output is easy to find. The default Cedar test runner copies the standard style of printing "." for each passing spec, "F" for failing specs, etc. Cedar also has the ability to use custom reporter objects, so you can have it output results any way you like, with a little effort.
OCUnit is the official unit testing framework for Objective C, and is supported by Apple. Apple has basically limitless resources, so if they want something done it will get done. And, after all, this is Apple's sandbox we're playing in. The flip side of that coin, however, is that Apple receives on the order of a bajillion support requests and bug reports each day. They're remarkably good about handling them all, but they may not be able to handle issues you report immediately, or at all. Cedar is much newer and less baked than OCUnit, but if you have questions or problems or suggestions send a message to the Cedar mailing list (cedar-discuss@googlegroups.com) and we'll do what we can to help you out. Also, feel free to fork the code from Github (github.com/pivotal/cedar) and add whatever you think is missing. We make our testing frameworks open source for a reason.
Running OCUnit tests on iOS devices can be difficult. Honestly, I haven't tried this for quite some time, so it may have gotten easier, but the last time I tried I simply couldn't get OCUnit tests for any UIKit functionality to work. When we wrote Cedar we made sure that we could test UIKit-dependent code both on the simulator and on devices.
Finally, we wrote Cedar for unit testing, which means it's not really comparable with projects like UISpec. It's been quite a while since I tried using UISpec, but I understood it to be focused primarily on programmatically driving the UI on an iOS device. We specifically decided not to try to have Cedar support these types of specs, since Apple was (at the time) about to announce UIAutomation.
I'm going to have to toss Frank into the acceptance testing mix. This is a fairly new addition, but it has worked excellently for me so far. Also, it is actually being actively worked on, unlike iCuke and the others.
For test-driven development, I like to use GHUnit; it's a breeze to set up, and it works great for debugging too.
Great List!
I found another interesting solution for UI testing iOS applications.
Zucchini Framework
It is based on UIAutomation.
The framework lets you write screen-centric scenarios in a Cucumber-like style.
The scenarios can be executed in the Simulator and on a device from a console (it is CI friendly).
The assertions are screenshot based. That sounds inflexible, but it gets you a nice HTML report with highlighted screen comparisons, and you can provide masks which define the regions you want pixel-exact assertions for.
Each screen has to be described in CoffeeScript, and the tool itself is written in Ruby.
It is kind of a polyglot nightmare, but the tool provides a nice abstraction over UIAutomation, and once the screens are described it is manageable even for a QA person.
I would choose iCuke for acceptance tests and Cedar for unit tests. UIAutomation is a step in the right direction for Apple, but the tools need better support for continuous integration; automatically running UIAutomation tests with Instruments is currently not possible, for example.
GHUnit is good for unit tests; for integration tests, I've used UISpec with some success (GitHub fork here: https://github.com/drync/UISpec), but am looking forward to trying iCuke, since it promises to be a lightweight setup, and you can use the Rails testing goodness, like RSpec and Cucumber.
I currently use Specta for RSpec-like setups, and its partner (as mentioned above) Expecta, which has tons of awesome matching options.
I happen to really like OCDSpec2, but I'm biased: I wrote OCDSpec and contribute to the second.
It's very fast, even on iOS, in part because it's built from the ground up rather than being put on top of OCUnit. It has an RSpec/Jasmine syntax as well.
https://github.com/ericmeyer/ocdspec2