I have a number of test suites that use the Test::Unit::TestCase package. Since they run in a parallel environment, I need to modify them to ensure that one TestSuite does not start until another has completed. Based on the documentation, there appears to be a way to control the order of suite execution:
If you need to specify the test order, you can do one of the
following:
Provide a suite() method which returns a Test::Unit::TestSuite.
However, there don't appear to be any examples of how to do this. Is this actually possible and if so how should it be done?
The distribution's own tests are insightful:
$ cpanm --look Test::Unit::TestCase
$ ack -l 'sub suite' t
t/tlib/SuiteTest.pm
t/tlib/AssertTest.pm
t/tlib/AllTests.pm
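For the record, here is a minimal sketch of such a suite() method, based on the empty_new and add_test helpers documented in Test::Unit::TestSuite (the My::First::TestCase and My::Second::TestCase names are placeholders for your own test case classes):

package My::OrderedSuite;
use base 'Test::Unit::TestCase';
use Test::Unit::TestSuite;

sub suite {
    my $class = shift;
    # Build the suite by hand; tests run in the order they are added,
    # so the second test case does not start until the first has finished.
    my $suite = Test::Unit::TestSuite->empty_new('ordered suite');
    $suite->add_test(Test::Unit::TestSuite->new('My::First::TestCase'));
    $suite->add_test(Test::Unit::TestSuite->new('My::Second::TestCase'));
    return $suite;
}

1;

A runner such as Test::Unit::TestRunner should then use suite() instead of auto-discovering test_* methods.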
I already know that we can use marks and then call pytest with -m to execute only certain tests.
My question is: is there a way to mark a test so that it is not executed by default, without adding any -m when calling pytest?
EDIT:
I am thinking something like:
mark the test in thespecialtest.py with a special mark (I don't know if one exists, which is why I'm asking), say
@pytest.mark.notselect
Then running the tests as usual with plain pytest would exclude that test.
If I want to run that test specifically, I can do so explicitly with pytest thespecialtest.py.
I know that the best and easiest way would be just to use -m when calling pytest, but I want to ask if there is an option where this would not be necessary.
The -m option is probably the most convenient one for this use case.
However, you can also choose which tests to run with the -k option, which works much like -m except that you select test cases by their names rather than by marks.
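For example, with the notselect mark proposed in the question, the exclusion looks like this on the command line:

pytest -m "not notselect"
pytest -k "not special"

The first form deselects every test carrying that mark; the second deselects every test whose name contains "special".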
Another option is to change the test discovery process, for example by telling pytest to collect and execute only functions whose names match a certain pattern, e.g. by adding to your pytest.ini:
[pytest]
python_functions = *_check
which tells pytest to collect and execute only functions that match this glob pattern. You can do this for classes and files as well.
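If the goal is simply to avoid typing -m on every run, another sketch (assuming the notselect marker from the question is registered) is to put the marker expression into addopts in pytest.ini:

[pytest]
markers =
    notselect: excluded from the default test run
addopts = -m "not notselect"

A plain pytest then skips the marked tests, and a -m given later on the command line should override the one injected by addopts, so something like pytest -m notselect thespecialtest.py can still run them explicitly.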
This issue is usually encountered when running make test and seeing one test fail. The README mentions that each test can be run individually, but doesn't clearly specify how to do so.
make test uses the script called TEST in the test directory (t). To replicate make test for a single file, one would use this script as follows:
[.../perl/t]$ ./perl -I../lib TEST op/array.t
t/op/array ... ok
All tests successful.
Elapsed: 0 sec
u=0.01 s=0.00 cu=0.03 cs=0.02 scripts=1 tests=194
If you want to see the raw output of the test script, you can run perl as follows:
[.../perl/t]$ ./perl -I../lib op/array.t
1..194
ok 1
ok 2
ok 3
...
ok 192 - holes passed to sub do not lose their position (multideref, mg)
ok 193 - holes passed to sub do not lose their position (aelem)
ok 194 - holes passed to sub do not lose their position (aelem, mg)
The above information and more is found in perlhack.
This document explains how Perl development works. It includes details about the Perl 5 Porters email list, the Perl repository, the Perlbug bug tracker, patch guidelines, and commentary on Perl development philosophy.
Note that you need to run make test_prep before the above commands work. (If you've run make test, you've effectively run make test_prep already.)
Run ./perl harness ../foo/boo.t in the t directory, where ../foo/boo.t is the path to the failing test.
To run a single test script, use perl, or better, prove. Assuming you are in the module's base directory:
prove -lv t/some-test-script.t
This will run the test script against the libraries in ./lib, with fallback to the libraries available to your install of Perl.
If you want to use the libraries built by make, then use this:
prove -bv t/some-test-script.t
Now the test script will be run against the libraries in ./blib, falling back to libraries installed for your Perl.
The test scripts are typically just Perl scripts that live in a t/ or xt/ or some similar path within the distribution's directory structure. So you can also run them just with Perl:
perl -Mblib t/some-test-script.t
But prove produces nicer test summary information and color coding.
That is about as granular as you can get unless tests are written to allow for targeting specific segments within a test script. If you need to target a specific test within a test script you'll usually have to dig into the test code itself.
I'm running into problems testing a new addition to a module. (Specifically, the ~ operator seems not to be working in Math::Complex for this new feature only.) It's too bizarre to be what it appears, but the ideal scheme would be to add the -d option to the top line of the .t program.
Well, I was quickly disabused of that idea! It does not invoke the debugger.
If I wanted to use the debugger, I'd need to create an edited copy of the .t program that:
Uses the module directly (with a plain use statement), not in the form of
BEGIN { use_ok('My::Module') };
Does not "use Test::More;"
A few other edits that cause gluteal pains
The problem with doing that is that any changes I make in the edited test program still need to be transferred back to the real test program used by "make test". Error prone at best.
I am already using "make test TEST_VERBOSE=1" so that my stdio output shows up. But there's GOT to be a simpler way to invoke the debugger on the .t file.
Thanks for ideas here.
-- JS
use_ok tests are great, but you should have them in test files of their own, not test files that also test other things.
I'm not sure why you would need to avoid Test::More or use_ok to run the debugger, though. What happens when you run your test directly under the debugger?
perl -d -Mblib t/yourtestfile.t
If all else fails, you can try using Enbugger in your test script.
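If you go the Enbugger route, a minimal sketch (assuming the Enbugger module from CPAN is installed) is to drop these two lines into the .t file at the point where things go wrong:

require Enbugger;   # load the debugger at runtime
Enbugger->stop;     # stop here, even though the script was not started with -d

Then run the test normally, e.g. perl -Mblib t/yourtestfile.t, and you will land in the debugger at that point.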
I have a bunch of perl tests:
Functional Tests
Mechanize Tests
Actual Unit Tests, asserting on functions and return values, etc
Tests that involve external Services, like WebServices
DB centric tests
REST tests
I run all of them via prove. I could theoretically re-arrange them into various directories and run something like find t/ -name '*.t' ! -name '*timeout*' | xargs prove -l, but it gets very difficult (and is not good engineering) to name tests a particular way just so we can select them via find.
Is there a way we can pass a wildcard list of tests to prove when we run it from the command line?
If not, is there a more sane approach than what we're currently using?
The usual way to do this is via environment variables. The test file checks whether it's supposed to run, and if it's not, does a quick skip_all.
For example:
use strict;
use warnings;
use Test::More;
BEGIN {
plan skip_all => "not running extended tests" unless $ENV{EXTENDED_TESTING};
};
# your slow tests here
done_testing();
Now usually that test will be skipped. But if you set the EXTENDED_TESTING environment variable to "1", it will run.
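For a one-off run you can set the variable just for that command:

EXTENDED_TESTING=1 prove -lr t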
Standard environment variables include EXTENDED_TESTING, RELEASE_TESTING, and NONINTERACTIVE_TESTING. NO_NETWORK_TESTING is also catching on.
There are various modules to automate this, such as Test::Is, which allows the simpler syntax:
use strict;
use warnings;
use Test::More;
use Test::Is "extended";
# your slow tests here
done_testing();
If you have some other application-specific categories, you'll have to invent some environment variables yourself. If you think they seem generically useful, blog about them, and maybe they'll catch on and become standard environment variables too.
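As a sketch for the categories in the question, using a hypothetical DB_TESTING variable for the DB-centric group, the guard at the top of those test files would just become something like:

plan skip_all => "set DB_TESTING=1 to run the DB-centric tests"
    unless $ENV{DB_TESTING};

and that group alone could then be run with DB_TESTING=1 prove -lr t.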
I think I found the answer: Test::Less.
Test::Less - Test Categorization and Subset Execution
test-less normally keeps the index file of mappings between tags and test files, in a file called t/Test-Less/index.txt. You can override this with the --file option or the TEST_LESS_INDEX environment variable.
Tags are strings matching /^[\w\-]+$/.
The -list and -prove commands take what is called a tag specification.
A specification is a list of tags and possibly file names.
test-less -prove foo bar baz
Runs all the foo tests, bar tests and baz tests.
test-less -prove foo,bar,baz
Runs only the tests that have all three tags.
Even after I was able to fix the compile bug in Test::Less, I was still unable to run any tests with it; it appears to have been broken since 2009. So Test::Class might be the answer:
http://search.cpan.org/~ether/Test-Class-0.46/lib/Test/Class.pm
Sometimes you just want to run a single test. Commenting out other tests or writing code to skip them can be a hassle, so you can specify the TEST_METHOD environment variable. The value is expected to be a valid regular expression and, if present, only runs test methods whose names match the regular expression. Startup, setup, teardown and shutdown tests will still be run.
One easy way of doing this is by specifying the environment variable before the runtests method is called.
Running a test named customer_profile:
#! /usr/bin/perl
use Example::Test;
$ENV{TEST_METHOD} = 'customer_profile';
Test::Class->runtests;
Running all tests with customer in their name:
#! /usr/bin/perl
use Example::Test;
$ENV{TEST_METHOD} = '.*customer.*';
Test::Class->runtests;
When a calculation was halfway done, I realized that the runtime limit of 50:00 might not be sufficient. So I used bstop 1234 to suspend job 1234 and tried to modify the old runtime limit of -W 50:00 to -W 100:00.
Can you suggest a command to do so?
I tried
$ bmod -W 100:00 1234
Please request for a minimum of 32 cores!
For more information, please contact XXX#XXX.
Request aborted by esub. Job not modified.
$ bmod [-W 100:00| -Wn ] 1234
-bash: -Wn]: command not found
100:00[8217]: Illegal job ID.
. Job not modified.
According to
[-W [hour:]minute[/host_name | /host_model] | -Wn]
from http://www.cisl.ucar.edu/docs/LSF/7.0.3/command_reference/bmod.cmdref.html
I don't quite understand the syntax; does -Wn mean "wall time, new"?
Many thanks for your help!
The first command fails because LSF calls the mandatory esub defined by your administrator to do some preprocessing on the command line, and this is returning an error. Here's the relevant quote from the page you linked:
Like bsub, bmod calls the master esub (mesub), which invokes any
mandatory esub executables configured by an LSF administrator, and any
executable named esub (without .application) if it exists in
LSF_SERVERDIR.
You're going to have to come up with a bmod command line that passes the esub checks, but that might cause other problems because some parameters (like -n I believe) can't be changed at runtime by default so bmod will reject the request if you specify it.
The -Wn option is used to remove the run limit from the job entirely rather than change it to a different value.
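In other words, assuming the esub checks can be satisfied, the two forms from the reference page look like this in practice:

bmod -W 100:00 1234    # change the run limit of job 1234 to 100 hours
bmod -Wn 1234          # remove the run limit from job 1234 entirely

The square brackets and the | in the quoted syntax are just the documentation's way of showing alternatives; they are not typed literally.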