What is the right way to implement setUpTestData-style behaviour in py.test?

I have a bunch of fixture data that I want to set up once for the test class, but I also don't want the associated tests messing with it.
I don't really get the py.test fixture system yet, so I'm not seeing how this is supposed to be done.
In vanilla Django this is achieved with setUpTestData, which lets you create fixture data in the DB once for the test class. Then, at the start of each test, it creates a transaction savepoint, and at the end of each test it rolls back to that savepoint. (This is in addition to wrapping the whole class in a transaction so that the database is left clean at the end.)
I could get this behaviour by inheriting from Django's TestCase, but pytest-django seems to want to run without it, and I've achieved everything else I need without doing so.
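For reference, here is a minimal sketch of the vanilla Django pattern described above (the Author model is a hypothetical stand-in):

from django.test import TestCase

from myapp.models import Author  # hypothetical model, for illustration only


class AuthorTests(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Runs once for the whole class; each test is then wrapped in a
        # savepoint that Django rolls back afterwards, so tests cannot
        # corrupt this shared data for one another.
        cls.author = Author.objects.create(name="Alice")

    def test_name(self):
        self.assertEqual(self.author.name, "Alice")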

I've looked around for this quite a lot, and the best I could find is this pytest plugin, which I have not tested myself:
https://github.com/tipsi/pytest-tipsi-django
Per the pytest-django issue list, it looks like this has been raised before, but there is no movement towards a fix:
https://github.com/pytest-dev/pytest-django/issues/514
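In the meantime, one rough approximation is a class-scoped fixture built on pytest-django's documented django_db_setup and django_db_blocker fixtures. This is only a sketch: the Author model is hypothetical, and unlike setUpTestData the shared rows are committed, so they have to be cleaned up manually at teardown.

import pytest

from myapp.models import Author  # hypothetical model, for illustration only


@pytest.fixture(scope="class")
def shared_author(django_db_setup, django_db_blocker):
    # django_db_blocker lets a non-function-scoped fixture touch the database.
    with django_db_blocker.unblock():
        author = Author.objects.create(name="Alice")
    yield author
    with django_db_blocker.unblock():
        author.delete()


@pytest.mark.django_db
class TestAuthor:
    def test_name(self, shared_author):
        assert shared_author.name == "Alice"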

Related

How do I undo a Setup call for a Moq Mock?

This might be a special use case that I am dealing with here. Here is what my simple C# NUnit test that uses Moq looks like:
Mock<ISomeRepository> mockR = new Mock<ISomeRepository>();
mockR.Setup(x => x.GetSomething()).Returns(new Something(a: 1, b: 2));
// use the mocked repository here
Later in this same unit test, or in another test case, I want to invoke the real implementation of the method GetSomething() on this mockR object.
Is there a way to do that? My repository is a singleton at heart, so even if I create a new object, GetSomething() still returns the same mocked value.
That would largely depend on your implementation of GetSomething, which is something you're not showing here ;). Also, I'm not sure that's even a valid setup; shouldn't there be a .Setup(..).Returns(..) there?
Mocks are used to stand in for the dependencies of a class, allowing that class to be tested without its actual dependencies. Alternatively, you can write tests that involve the actual dependencies.
But using a mocked dependency and the real dependency within the same unit test sounds like you're not clear what your test is testing.
If it's another test case, it shouldn't be a problem either. Each test should not impact another, so if you set up the class under test separately that should be fine, even with a singleton.
I'm assuming that you're injecting the singleton dependency. If not, do that.

FitNesse: automatic fixture stub generation

When I write a test in FitNesse I usually write several tables in wiki format first and then write the fixture code afterwards. I do that by executing the test on the wiki server and then creating the fixture classes, copying their names from the error messages of the failed test page run.
This is an annoying process and could be done by an automatic stub generator, that creates the fixture classes with appropriate class names and method names.
Is there already such a generator available?
Not as far as I know. It sounds like you are using Fit, correct?
It sounds like an interesting feature, maybe you can create one as a plugin?

JUnit Fork-Mode in Java Classes

There's support for forkMode in Ant and Maven, and occasionally we use it with the value perTest. However, the JUnit tests in Eclipse still fail when we run the tests on a class or on a project (Run As -> JUnit Test). Apparently JUnit uses its default settings or behaviour and executes the tests in parallel, causing some red crosses in the JUnit view.
Is there a way to code something into the test-class that lets JUnit behave like the forkMode setting? We don't mind if there's an Eclipse-only solution for this.
Or can this be done with a Run Configuration in Eclipse?
EDIT:
I understand that the problems are caused by data left behind after tests, which makes later tests fail. While this makes sense, please understand that it doesn't answer my question. Think of my situation as being part of some sort of Tiger Team: we have a bunch of issues, and fixing that part of the existing tests is just one of them. Trust me, we will try to cover everything... (I haven't heard that in a while)
Eclipse runs JUnit tests serially, in a single thread, in the same JVM. If you have tests that normally operate in parallel, this should not affect the test behavior. However, if you assume that you can change VM-wide state, such as system properties or static class variables, and that the next test will not see those changes, that assumption will break your tests.
The rule of thumb is that each test should leave the system (vm, database, filesystem) exactly as it found it so that each test can be run at any time, in any order.

How to run some but not all tests in a Perl test suite in parallel?

I've got a Perl-based test suite with 10,000+ tests that I would like to make run faster. I've tested using the -j flag to prove, and I have found that most, but not all, of my tests are ready to run in parallel.
While I can work on making the remaining tests parallel-friendly, I expect there will always be some tests that are not. What's a good way to manage this? I would like it to be easy to run the whole set of tests efficiently, and easy to mark tests as "not parallel-ready" when I need to.
Here are some options I see:
prove could be patched to support marking some tests as not parallel-ready
Jenkins is being used to manage the test suite runs. I could split the non-parallel tests off into their own run. In other words, give up and use two test runs.
Perhaps there is a way to merge two TAP result streams together that I have yet to discover.
I'm not too concerned with how I will manage the list of exceptions. Either I can keep a list in a file as part of the test harness infrastructure, or I could put something in each test header that would mark it as such, and our test harness could determine the list of exceptions dynamically.
( The test suite is partially based on Test::Class, and I'll also be looking at Test::Class::Load to speed it up as well. )
I found a solution. It's in the documentation for aggregate_tests() for TAP::Harness. It includes a code sample for how I could write my own harness for this purpose:
...This is useful, for example, in the case where some tests should
run in parallel but others are unsuitable for parallel execution.
my $formatter   = TAP::Formatter::Console->new;
my $ser_harness = TAP::Harness->new( { formatter => $formatter } );
my $par_harness = TAP::Harness->new(
    {   formatter => $formatter,
        jobs      => 9,
    }
);
my $aggregator = TAP::Parser::Aggregator->new;
$aggregator->start();
$ser_harness->aggregate_tests( $aggregator, @ser_tests );
$par_harness->aggregate_tests( $aggregator, @par_tests );
$aggregator->stop();
$formatter->summary($aggregator);
From there it looks like I could:
Sub-class App::Prove and override _runtests(), which is where the new functionality above could be merged in.
Fork prove so that it calls My::App::Prove instead of App::Prove.
Now that I better understand how the pieces fit together, I can see how I might create a patch for prove that adds an option like --exclude-from-parallel FILE, which would allow you to specify a file containing a list of test files to be excluded from parallel testing.
UPDATE 2012-08-16: I now have a patch for prove and have submitted it for review. You can view and comment on the pull request. No summary is produced after the run output; it's not clear why.
I've now found the best solution so far to this problem. It turns out that prove has had undocumented support for marking some tests to be run in sequence instead of in parallel since 2008. It's backed by a rather fancy "rules" system in TAP::Parser::Scheduler that allows complex specifications of ordering arrangements for parallel and sequential test runs.
Here's the basic current recipe for prove:
# All tests are allowed to run in parallel, except those starting with "p"
--rules='seq=t/p*.t' --rules='par=**'
I have a new pull request that adds documentation for this feature, and have started a discussion about possibly offering a simpler syntax for basic exceptions as well. See the pull request for details.
I found another solution that advertises this feature, although I could only get trivial cases to work: Test::Steering. It allows me to do this:
include_tests( { jobs => 4 }, @parallel_tests );
include_tests( @serial_tests );
With this solution, be aware:
Before it actually works, I currently have to patch the code to fix a basic bug in it that has remained unpatched for multiple years.
Additional code is needed to handle generating the lists of parallel and serial tests to run.
I didn't actually get a combined summary for my real-world test... both sections emitted their own summary reports, so it didn't really work. Maybe I missed something, or maybe it's broken.
Test::Parallel also provides an easier way to run some tests in parallel; have a look at the sample from https://metacpan.org/pod/Test::Parallel
Another option: use a rules file for TAP::Harness.
You can build custom rules in a YAML file (called testrules.yml by default). I needed something similar to what you describe, which I was able to do with a testrules.yml file that looked like this:
---
seq:
    # tests that are not parallel-ready (will run in isolation)
    - seq:
        - t/test1.t
        - t/test2.t
    # tests that can run in parallel
    - par:
        # wildcard for everything else
        - **
In my case, I was using this with code that directly called App::Prove, rather than command-line prove. But I think it would work with prove too?

Is it possible to run NUnit against a specific (long) list of tests?

I have a list of several thousand NUnit tests that I want to run (generated automatically by another tool). (This is a subset of all of the tests, and changes frequently)
I'd like to be able to run these via NUnit-Console.exe. Unfortunately the /run option only takes a direct list of test names, which in my case would not fit on a single command line. I'd like it to pick up the list from a file.
I appreciate that I could use categories, but the list I want to run changes frequently and so I'd prefer not to have to start changing source code.
Does anyone know if there is a clean way to get NUnit to run my specified tests?
(I could break it down into a series of smaller calls to NUnit-console with a full command line, but that's not very elegant)
(If it's not possible, maybe I should add it as an NUnit feature request.)
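Along the lines of the "series of smaller calls" idea above, here is a rough Python sketch (untested; the file name, assembly name, and chunk size are arbitrary assumptions) that reads the generated test names from a file and feeds them to nunit-console in batches via /run:

import subprocess

TESTS_FILE = "tests_to_run.txt"  # hypothetical file produced by the other tool
ASSEMBLY = "tests.dll"           # hypothetical test assembly
CHUNK_SIZE = 200                 # arbitrary; keeps each command line short enough

with open(TESTS_FILE) as fh:
    names = [line.strip() for line in fh if line.strip()]

for i in range(0, len(names), CHUNK_SIZE):
    chunk = names[i:i + CHUNK_SIZE]
    # /run takes a comma-separated list of fully qualified test names
    cmd = ["nunit-console.exe", "/run:" + ",".join(chunk), ASSEMBLY]
    subprocess.run(cmd, check=True)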
I had a reply from Charlie Poole (from the NUnit development team): this is not currently possible, but it has been added as a feature request for NUnit 2.6.
I see what you're saying, but like you say you can run a single fixture from the command line.
nunit-console /fixture:namespace.fixture tests.dll
How about generating all the tests in the same fixture? Or placing them all in the same assembly?
nunit-console tests.dll
As mentioned in the NUnit link, we need to specify the scenario/test case name. It's simple, but there's a bit of a trick to it: directly giving the test case name will not serve the purpose, and you will end up with 0 test cases executed. We need to write out the exact path for it.
I don't know how it works for other languages, but using C# I have found a solution. Whenever we create a feature file, a corresponding feature.cs file gets created in Visual Studio. Open featureFileName.feature.cs and look for the namespace, and keep it aside (part 1):
namespace MMBank.Test.Features
Scroll down a bit and you will find the class name. Note that as well and keep it aside (part 2):
public partial class HistoricalTransactionFeature
Keep scrolling down and you will see the code that NUnit understands for execution:
[NUnit.Framework.TestAttribute()]
[NUnit.Framework.DescriptionAttribute("TC_1_A B C D")]
[NUnit.Framework.CategoryAttribute("MM_Bank")]
Below that code you can see the function/method name, which will most likely be TC_1_ABCD (with certain parameters):
public virtual void TC_1_ABCD(string username, string password, string visit)
You will have multiple such methods, one for each scenario in your feature file. Note the method (test case) you want to execute and keep it aside (part 3).
Now join all the parts with dots. You will end up with something like this:
MMBank.Test.Features.HistoricalTransactionFeature.TC_1_ABCD
That's it. Similarly, you can build the test case names from multiple feature files and stack them up in a text file, with each test case name on its own line. For the command itself, browse through the NUnit link above for execution from the command prompt.