I already know that we can mark tests and then call pytest with -m to execute only certain tests.
My question is: is there a way to mark a test so that it is not executed by a plain pytest call, without adding any -m option?
EDIT:
I am thinking something like:
Mark the test in thespecialtest.py with a special mark (I don't know if such a mark exists; that is why I'm asking):
@pytest.mark.notselect
Then running the tests as usual with pytest would exclude that test.
If I want to run that test specifically, I can do so explicitly with pytest thespecialtest.py.
I know that the best and easiest way would be just to use -m when calling pytest, but I want to ask whether there is an option where this is not necessary.
The -m option is probably the most convenient one for this use case.
However, you can also choose which tests to run with the -k option, which is basically the same as -m except that you select test cases by their names rather than by marks.
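For example, assuming a test function named test_special (a hypothetical name) and a pytest version whose -k accepts boolean expressions, you could exclude it by name with:
pytest -k "not special"
and run only it with:
pytest -k "special"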
Another option is to change the test discovery process: you can, for example, tell pytest to collect and execute only functions whose names match a certain pattern, e.g. by adding to your pytest.ini:
[pytest]
python_functions = *_check
which tells pytest to collect and execute only functions matching this glob pattern. You can do the same for classes and files as well.
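For illustration, a test file might then look like this (the file and function names are made up; the file name still has to match pytest's python_files pattern, test_*.py by default):
# test_example.py -- used together with the pytest.ini above
def addition_check():
    # collected and run: the name matches the *_check glob
    assert 1 + 1 == 2

def test_addition():
    # NOT collected: the name no longer matches the configured pattern
    assert 1 + 1 == 2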
I'm running into problems testing a new addition to a module. (Specifically, the ~ operator seems not to be working in Math::Complex for this new feature only.) It's too bizarre to be what it appears, so the ideal scheme would be to add the -d option to the #! line at the top of the .t program.
Well, I was quickly disabused of that idea! It does not invoke the debugger.
If I wanted to use the debugger, I'd need to create an edited copy of the .t program that:
Uses the module directly (with a plain use statement), not in the form of
BEGIN { use_ok('My::Module') };
Does not "use Test::More;"
Makes a few other edits that cause gluteal pains
The problem with doing that is that any changes I make in the edited test program still need to be transferred back to the real test program used in "make test". Error-prone at best.
I am already using "make test TEST_VERBOSE=1" so that my standard output shows up. But there's GOT to be a simpler way to invoke the debugger on the .t file.
Thanks for ideas here.
-- JS
use_ok tests are great, but you should have them in test files of their own, not test files that also test other things.
I'm not sure why you would need to avoid Test::More or use_ok to run the debugger, though. What happens when you run your test directly?
perl -d -Mblib t/yourtestfile.t
If all else fails, you can try using Enbugger in your test script.
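For reference, a minimal sketch of the Enbugger approach (assuming Enbugger's stop method; check the Enbugger documentation for the details): load it at the point where you want to break, and it drops you into the interactive debugger even though the script was not started with -d:
use strict;
use warnings;
use Test::More tests => 1;

# Break into the interactive debugger at this point, even though the
# script was not started with perl -d.
require Enbugger;
Enbugger->stop;

ok(1, 'placeholder assertion');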
I have a bunch of perl tests:
Functional Tests
Mechanize Tests
Actual Unit Tests, asserting on functions and return values, etc
Tests that involve external Services, like WebServices
DB centric tests
REST tests
I run all of them via prove. I could theoretically re-arrange them into various directories and run something like find t/ -name '*.t' ! -name '*timeout*' | xargs prove -l, but it gets very difficult (and is not good engineering) to name tests a particular way just so we can filter them via find.
Is there a way we can pass a wildcard list of tests to prove when we run it from the command line?
If not, is there a more sane approach than what we're currently using?
The usual way to do this is via environment variables. The test file checks whether it's supposed to run and, if it's not, does a quick skip_all.
For example:
use strict;
use warnings;
use Test::More;
BEGIN {
    plan skip_all => "not running extended tests" unless $ENV{EXTENDED_TESTING};
}
# your slow tests here
done_testing();
Now usually that test will be skipped. But if you set the EXTENDED_TESTING environment variable to "1", it will run.
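For example, enabling the extended tests for a single prove run is just a matter of setting the variable for that command:
EXTENDED_TESTING=1 prove -lr t/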
Standard environment variables include EXTENDED_TESTING, RELEASE_TESTING, and NONINTERACTIVE_TESTING. NO_NETWORK_TESTING is also catching on.
There are various modules to automate this, such as Test::Is, which allows the simpler syntax:
use strict;
use warnings;
use Test::More;
use Test::Is "extended";
# your slow tests here
done_testing();
If you have some other application-specific categories, you'll have to invent some environment variables yourself. If you think they seem generically useful, blog about them, and maybe they'll catch on and become standard environment variables too.
I think I found the answer: Test::Less.
Test::Less - Test Categorization and Subset Execution
test-less normally keeps the index file of mappings between tags and test files, in a file called t/Test-Less/index.txt. You can override this with the --file option or the TEST_LESS_INDEX environment variable.
Tags are strings matching /^[\w\-]+$/.
The -list and -prove commands take what is called a tag specification.
A specification is a list of tags and possibly file names.
test-less -prove foo bar baz
Runs all the foo tests, bar tests and baz tests.
test-less -prove foo,bar,baz
Even after I was able to fix the compile bug in Test::Less, I was still unable to run any tests using it; Test::Less has been broken since 2009. So looking at Test::Class might be the answer:
http://search.cpan.org/~ether/Test-Class-0.46/lib/Test/Class.pm
Sometimes you just want to run a single test. Commenting out other tests or writing code to skip them can be a hassle, so you can specify the TEST_METHOD environment variable. The value is expected to be a valid regular expression and, if present, only runs test methods whose names match the regular expression. Startup, setup, teardown and shutdown tests will still be run.
One easy way of doing this is by specifying the environment variable before the runtests method is called.
Running a test named customer_profile:
#! /usr/bin/perl
use Example::Test;
$ENV{TEST_METHOD} = 'customer_profile';
Test::Class->runtests;
Running all tests with customer in their name:
#! /usr/bin/perl
use Example::Test;
$ENV{TEST_METHOD} = '.*customer.*';
Test::Class->runtests;
I have a test suite, as usual for Perl projects, containing a lib and a t directory. The tests in t are structured through subdirectories. So I run them using:
prove -Ilib -r t/
So far nothing special, and afaik quite a standard way of testing in Perl.
Since the assumption is that this is the standard way of testing, I'd like to make sure that the following holds:
"If you run prove -r on t, you have tested everything there is to test."
This is very important, since otherwise you can never be sure that you have really run all the tests and that everything is fine. Somebody calling the above might then, without knowing it, run only a part of the available tests and leave some behind. Quite annoying: tests that are not run are of no help. It should be as easy and predictable as possible for developers to run all the tests! It is a bad thing when you have to look up how to run the rest of the test suite; you might not know about it, or might not do it anyway.
So here is my problem: I have to integrate some tests using pgTAP, which kindly provides the tool pg_prove. Now I have to run two commands to do the testing: in addition to prove -Ilib -r, I also have to run something like pg_prove -S schema=customerX -U dbuser -d dbname t/pgTAP/*.sql. The problem is not that big if you call the tests automatically from cron or whatever, but it really decreases the chance that we lazy developers run all the tests during our busy days.
So I wonder what the best approach would be to implement the tests in such a way that prove will also include them. Do I have to create some .t files which wrap the whole thing (and how?)? Are there any tricks I can do with the Harness modules on CPAN? Would a simple test_all.sh in the root directory, containing both commands, do the job best, even if it breaks the assumption I made above?
So my question in short is: can I run all tests, including the pgTAP tests, with prove? If not, is there a best practice for solving my problem?
Thanks a lot.
Yes. In fact, pg_prove just passes everything off to prove. Assuming your pgTAP tests end in .sql, you can run all your tests like this:
prove -lr --ext .sql --ext .t \
--source pgTAP \
--pgtap-option dbname=dbname \
--pgtap-option username=dbuser \
--pgtap-option suffix=.pg \
--pgtap-option set=schema=customerX
If you use Module::Build, you can also have ./Build test run all the tests, as I've done for circle.
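A sketch of what that Build.PL configuration might look like (the values here are illustrative, not taken from a real project; check Module::Build's documentation for test_file_exts, use_tap_harness and tap_harness_args):
use Module::Build;

Module::Build->new(
    module_name      => 'My::App',          # hypothetical distribution name
    test_file_exts   => [qw(.t .sql)],      # pick up both Perl and pgTAP tests
    use_tap_harness  => 1,
    tap_harness_args => {
        sources => {
            Perl  => undef,                  # ordinary .t files
            pgTAP => {                       # handler options for the .sql tests
                dbname   => 'dbname',
                username => 'dbuser',
            },
        },
    },
)->create_build_script;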
See the TAP::Parser::SourceHandler::pgTAP documentation for details.
Can py.test support multiple -k options?
Each test case belongs to a particular group, such as _eventnotification or _interface, etc.
Is it possible to run test cases that belong to either one group or both at the same time?
I.e., run test cases that have _eventnotification or _interface in their name in a single run.
I tried the following, and only the test cases with _interface were executed:
py.test -k "_eventnotification" -k "_interface"
If that is not supported, is there another way to do this?
The bad news: pytest-2.3.3 does not support it.
The good news: I took your question as an opportunity to finally enhance "-k" behaviour, so that you can use "not", "or", "and", etc.; see the extended -k examples in the docs. It now works like "-m" except that it matches on (substrings of) test names rather than markers. You can use this in-development pytest version with "pip install -i http://pypi.testrun.org -U pytest".
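With that change, the selection from the question can be expressed as a single -k expression, along the lines of:
py.test -k "_eventnotification or _interface"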
I have a number of test suites that make use of the Test::Unit::TestCase package, and since it is a parallel environment I need to modify them to ensure that one TestSuite does not start until another has completed. Based upon the documentation, it appears that there is a way to control the order of suite execution:
If you need to specify the test order, you can do one of the
following:
Provide a suite() method which returns a Test::Unit::TestSuite.
However, there don't appear to be any examples of how to do this. Is this actually possible, and if so, how should it be done?
The distribution's own tests are insightful:
$ cpanm --look Test::Unit::TestCase
$ ack -l 'sub suite' t
t/tlib/SuiteTest.pm
t/tlib/AssertTest.pm
t/tlib/AllTests.pm
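For reference, a suite() method built by hand looks roughly like this (a sketch assuming Test::Unit::TestSuite's empty_new/add_test API; the class names are made up, so verify the details against the test files listed above):
package My::OrderedSuite;
use strict;
use warnings;
use base 'Test::Unit::TestCase';
use Test::Unit::TestSuite;

sub suite {
    my $class = shift;
    # Build the suite explicitly so the contained suites run in this order.
    my $suite = Test::Unit::TestSuite->empty_new('ordered suite');
    $suite->add_test(Test::Unit::TestSuite->new('My::FirstTest'));
    $suite->add_test(Test::Unit::TestSuite->new('My::SecondTest'));
    return $suite;
}

1;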