Is there a way to define a mark in a PyTest fixture?
I am trying to disable slow tests when I specify -m "not slow" in pytest.
I have been able to disable individual tests, but not a fixture that I use for multiple tests.
My fixture code looks like this:
@pytest.fixture()
@pytest.mark.slow
def postgres():
    # get a postgres connection (or something else that uses a slow resource)
    yield conn
and several tests have this general form:
def test_run_my_query(postgres):
    # Use my postgres connection to insert test data, then run a test
    assert ...
I found the following comment within https://docs.pytest.org/en/latest/mark.html (updated link):
"Marks can only be applied to tests, having no effect on fixtures." Is the reason for this comment that fixtures are essentially function calls and marks can only be specified at compile time?
Is there a way to specify that all tests using a specific fixture (postgres in this case) are marked as slow, without adding @pytest.mark.slow to each test?
It seems you already found the answer in the docs. Subscribe to https://github.com/pytest-dev/pytest/issues/1368 to follow this feature; it might be added in a later pytest version.
For now, you can work around it with a hack:
# in conftest.py
def pytest_collection_modifyitems(items):
    for item in items:
        if 'postgres' in getattr(item, 'fixturenames', ()):
            item.add_marker("slow")
I'm writing a Brownie test like the one below:
from brownie import accounts

class Test1:
    my_account = accounts[0]

    def test_fn(self):
        ...
The test result says "my_account = accounts[0], list index out of range"
But if I put "my_account = accounts[0]" inside test_fn like below, then the test runs fine.
from brownie import accounts

class Test1:
    def test_fn(self):
        my_account = accounts[0]
        ...
Why is that? What is the pytest scope for imported variables?
I tried searching for anything related to pytest variable scope, but nothing matched my question.
I cannot reproduce your example because I do not have any accounts in either case.
However, I think your issue is happening because the accounts variable has to be filled with values first, as described on the account management page in Brownie's docs.
Class attribute definitions are executed during the collection stage, while tests are executed only after collection is finished. So if accounts is filled somewhere else in the code (e.g. in an autouse fixture or another test), it will not yet be populated during pytest's collection stage.
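As a minimal illustration (plain pytest, not Brownie-specific; the list and fixture names here are made up), the class body runs when the module is imported during collection, while the test body runs only after fixtures have had a chance to fill the list:
import pytest

accounts = []  # stands in for brownie's accounts; empty at import/collection time

@pytest.fixture(autouse=True)
def fill_accounts():
    # simulates a plugin populating accounts only after collection has finished
    accounts.append("0xabc")
    yield
    accounts.clear()

class TestExample:
    # Uncommenting the next line reproduces the error: the class body is executed
    # at collection time, before fill_accounts has run, so the list is still empty.
    # my_account = accounts[0]

    def test_uses_account(self):
        # runs after the autouse fixture, so the lookup succeeds
        assert accounts[0] == "0xabc"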
Maybe you can provide some more details about your case. How are you generating these accounts in your successful case?
UPD:
According to the eth-brownie package source code, running brownie test executes pytest with the pytest-brownie plugin enabled, like this:
pytest.main(pytest_args, ["pytest-brownie"])
So this plugin declares some hooks.
One of them is pytest_collection_finish, which is declared in the PytestBrownieRunner class. This class is used as the plugin when tests are executed in a single thread. According to the pytest docs, this hook is called after all tests have been collected.
This hook executes following code:
if not outcome.get_result() and session.items and not brownie.network.is_connected():
    brownie.network.connect(CONFIG.argv["network"])
I believe this is where the information about your configured network, including accounts, gets added.
So here is the difference:
When you try to reach accounts during a test, the code above has already been executed.
However, when you try to reach accounts during class definition, no hooks have been executed yet, so there is no information about your network.
Maybe I am wrong, but I assume your issue is related to the order of pytest's execution stages.
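If that is indeed the cause, one possible workaround (just a sketch, assuming a standard Brownie test module run via brownie test) is to defer the accounts[0] lookup until test run time, for example via a fixture:
import pytest
from brownie import accounts

@pytest.fixture
def my_account():
    # evaluated at test run time, after pytest-brownie has connected the network
    return accounts[0]

class Test1:
    def test_fn(self, my_account):
        assert my_account is not None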
I have a class which extends org.scalatest.junit.JUnitSuite. This class has a couple of tests. I do not want these tests to run in parallel.
I know how simple it is with Specs2 (extend the class with Specification and add a single line sequential inside the class) as shown here: How to run specifications sequentially.
I do not want to alter the Build file by setting:
parallelExecution in Test := false
nor do I want to use tags to run specific test files sequentially.
All I want is a way to make sure that all tests inside my class run sequentially. Is this possible with ScalaTest? Any sample test/template is appreciated.
A quick Google search pointed me to this: http://doc.scalatest.org/2.0/index.html#org.scalatest.Sequential
Just for the couple of tests I have, I think it is total overkill to create StepSuites. I am not completely sure if that's the way to go about it in my case!
The doc for org.scalatest.ParallelTestExecution says
ScalaTest's normal approach for running suites of tests in parallel is to run different suites in parallel, but the tests of any one suite sequentially.
So it looks like you don't have to do anything to get what you want, if your tests are in a single suite.
As a small part of a much larger set of tests, I have a suite of test functions I want to run on each of a list of objects. Basically, I have a set of plugins, and a set of "plugin tests".
Naively, I can just make a list of test functions that take a plugin argument, and a list of plugins, and have a test where I call all of the former on all of the latter. But ideally, each test/plugin combo would appear as an individual test in the results.
Is there already a nicer/standardized way of doing something like this in pytest?
Check out pytest's documentation on parametrization (https://pytest.org/latest/parametrize.html).
It's a mechanism for running the same test a number of times with different parameters -- it sounds like just what you want. It generates tests that run individually, and they have nice output and reporting.
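A rough sketch of what that could look like for your plugin case (the plugin classes here are made-up placeholders, not a real API):
import pytest

class UpperPlugin:
    def transform(self, text):
        return text.upper()

class ReversePlugin:
    def transform(self, text):
        return text[::-1]

# each test/plugin combination is reported as its own test item, e.g.
# test_plugin_returns_str[UpperPlugin] and test_plugin_returns_str[ReversePlugin]
@pytest.mark.parametrize("plugin", [UpperPlugin(), ReversePlugin()],
                         ids=lambda p: type(p).__name__)
def test_plugin_returns_str(plugin):
    assert isinstance(plugin.transform("abc"), str)
Each parameter shows up separately in the report, which gives you the per-test, per-plugin granularity you're after.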
I am trying to do some integration testing in a Play! 2 for Scala application. For this, I need to load some fixtures to have the DB in a known state before each test.
At the moment, I am just invoking a method that executes a bunch of Squeryl statements to load data. But declaring the fixtures declaratively, either with a Scala DSL or in a language like JSON or YAML, is more readable and easier to maintain.
In this example of a Java application I see that fixtures are loaded from a YAML file, but the equivalent Scala app resorts to manual loading, as I am doing right now.
I have also found this project which is not very well documented, and it seems a bit more complex than I'd like - it is not even clear to me where the fixture data is actually declared.
Are there any other options to load fixtures in a Play! application?
Use Evolutions. Write a setup and teardown script for the fixtures in SQL, or use mysqldump (or the equivalent for your DB) to export an existing test DB as SQL.
http://www.playframework.org/documentation/1.2/evolutions
I find the most stress-free way to do testing is to set everything up in an in-memory database, which means tests run fast, and to drive the tests from Java using JUnit. I use H2DB, but there are a few gotchas you need to watch out for. I learned these the hard way, so this should save you some time.
Play has a nice system for setting up and tearing down your application for integration testing, using running( FakeApplication() ) { .. }, and you can configure it to use an in-memory database with FakeApplication(additionalConfiguration = inMemoryDatabase()). See:
http://www.playframework.org/documentation/2.0/ScalaTest
OutOfMemory errors: However, running a sizeable test fixture a few times on my machine caused OutOfMemory errors. This seems to be because the default implementation of the inMemoryDatabase() function creates a new randomly named database and doesn't clean up the old ones between test runs. This isn't necessary if you've written your evolution teardown scripts correctly, because the database will be emptied out and refilled between each test. So we overrode this behaviour to use the same database and the memory issues disappeared.
DB Dialect: Another issue is that our production database is MySQL which has a number of incompatibilities with H2DB. H2DB has compatibility modes for a number of dbs, which should reduce the number of problems you have:
http://www.h2database.com/html/features.html#compatibility
Putting this all together makes it a little unwieldy to add before each test, so I extracted it into a function:
def memDB[T](code: => T) =
  running(FakeApplication(additionalConfiguration = Map(
    "db.default.driver" -> "org.h2.Driver",
    "db.default.url" -> "jdbc:h2:mem:test;MODE=MySQL"
  )))(code)
You can then use it like so (specs example):
"My app" should {
"integrate nicely" in memDB {
.....
}
}
Every test will start a fake application, run your fixture setup evolutions script, run the test, then tear it all down again. Good luck!
Why not use the Java example in Scala? That exact code should also work without modifications in Scala...
I've got a Perl-based test suite with 10,000+ tests that I would like to make run faster. I've tested using the -j flag to prove, and I have found that most, but not all, of my tests are ready to run in parallel.
While I can work on making the remaining tests parallel-friendly, I expect there will always be some tests which are not. What's a good way to manage this? I would like for it to be easy to run the whole set of tests efficiently, and to make it easy to mark tests as "not-parallel-ready" when I need to.
Here are some options I see:
prove could be patched to support marking some tests as not-parallel-ready
Jenkins is being used to manage the test suite runs. I could split off the non-parallel tests into their own run. In other words, give up and use two test runs.
Perhaps there is a way to merge two TAP result streams together that I have yet to discover.
I'm not too concerned with how I will manage the list of exceptions. Either I can keep a list in a file as part of the test harness infrastructure, or I could put something in each test header that would mark it as such, and our test harness could determine the list of exceptions dynamically.
( The test suite is partially based on Test::Class, and I'll also be looking at Test::Class::Load to speed it up as well. )
I found a solution. It's in the documentation for aggregate_tests() for TAP::Harness. It includes a code sample for how I could write my own harness for this purpose:
...This is useful, for example, in the case where some tests should
run in parallel but others are unsuitable for parallel execution.
my $formatter   = TAP::Formatter::Console->new;
my $ser_harness = TAP::Harness->new( { formatter => $formatter } );
my $par_harness = TAP::Harness->new(
    {   formatter => $formatter,
        jobs      => 9,
    }
);
my $aggregator = TAP::Parser::Aggregator->new;
$aggregator->start();
$ser_harness->aggregate_tests( $aggregator, @ser_tests );
$par_harness->aggregate_tests( $aggregator, @par_tests );
$aggregator->stop();
$formatter->summary($aggregator);
From there it looks like I could:
Sub-class App::Prove and override _runtests(), which is where the new functionality above could be merged in.
Fork prove so that it calls My::App::Prove instead of App::Prove.
Now that I better understand how the pieces fit together, I can see how I might create a patch for prove that would add an option like --exclude-from-parallel FILE, allowing you to specify a file containing a list of test files to be excluded from parallel testing.
UPDATE 2012-08-16: I have a patch for prove now, and have submitted it for review. You can view and comment on the Pull Request. No summary is produced after the run output. It's not clear why.
I've now found the best solution so far to this problem. It turns out that prove has had undocumented support for marking some tests to be run in sequence instead of in parallel since 2008. It's backed by a rather fancy "rules" system in TAP::Parser::Scheduler that allows for complex specifications of ordering arrangements for parallel and sequential test runs.
Here's the basic current recipe for prove:
# All tests are allowed to run in parallel, except those starting with "p"
--rules='seq=t/p*.t' --rules='par=**'
I have a new pull request that adds documentation for this feature, and have started a discussion about possibly offering a simpler syntax for basic exceptions as well. See the pull request for details.
I found another solution that advertises this feature, but I could only get trivial cases to work: Test::Steering. It allows me to do this:
include_tests( { jobs => 4 }, @parallel_tests );
include_tests( @serial_tests );
With this solution, be aware:
Before it actually works, I currently have to patch the code to fix a basic bug that has remained unfixed for multiple years.
Additional code is needed to handle generating the lists of parallel and serial tests to run.
I didn't actually get a combined summary for my real-world test... both sections emitted their own summary reports, so it didn't really work. Maybe I missed something, or maybe it's broken.
Test::Parallel also provides an easier way to run some tests in parallel.
Have a look at the sample from https://metacpan.org/pod/Test::Parallel.
Another option: use a rules file for TAP::Harness.
You can build custom rules in a YAML file (called testrules.yml by default). I needed something similar to what you describe, which I was able to do with a testrules.yml file that looked like this:
---
seq:
  # tests that are not parallel-ready (will run in isolation)
  - seq:
      - t/test1.t
      - t/test2.t
  # tests that can run in parallel
  - par:
      # wildcard for everything else
      - **
In my case, I was using this with code that directly called App::Prove, rather than command-line prove. But I think it would work with prove too?