I have created the following testing class:
package Test::DB;
# $Id$
#
use strict;
use warnings;
our $VERSION = '0.01';
use Carp;
use English '-no_match_vars';
use Test::Most;
use base 'Test::Class';
sub startup : Tests(startup) {
    eval {
        require esp_libs_db;
        esp_libs_db->import;
        1;
    } or Carp::croak($EVAL_ERROR);
    return 1;
}
sub connect : Tests(2) {
    can_ok 'esp_libs_db', 'espDbConnect';
    my $schema = 'test_schema';
    throws_ok { espDbConnect($schema) }
        qr/Cannot locate database ($schema) connection file./,
        'dies when connection file does not exist';
    return 1;
}
1;
When I come to run the test I get the following output:
t/run.t ..
#
# Test::DB->connect
1..2
ok 1 - esp_libs_db->can('espDbConnect')
not ok 2 - dies when connection file does not exist
# Failed test 'dies when connection file does not exist'
# at t/tests/Test/DB.pm line 39.
# (in Test::DB->connect)
# expecting: Regexp ((?-xism:Cannot locate database (test_schema) connection file.))
# found: <FONT color=red size=4 face=arial>ERROR: PM_DB_0004: Cannot locate database (test_schema) connection file.</FONT> at t/tests/Test/DB.pm line 38
# Looks like you failed 1 test of 2.
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/2 subtests
Test Summary Report
-------------------
t/run.t (Wstat: 256 Tests: 2 Failed: 1)
Failed test: 2
Non-zero exit status: 1
Files=1, Tests=2, 3 wallclock secs ( 0.44 usr 0.07 sys + 2.44 cusr 0.43 csys = 3.38 CPU)
Result: FAIL
I cannot see why the second test is failing when it should quite clearly pass.
( and ) are special characters in regular expressions; try escaping them:
qr/Cannot locate database \($schema\) connection file./
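If $schema could ever contain other regex metacharacters, \Q...\E (quotemeta) is a safer habit than escaping by hand. A sketch of the same assertion, not from the original answer:
# \Q...\E escapes every regex metacharacter in the interpolated text,
# including the literal parentheses around $schema.
throws_ok { espDbConnect($schema) }
    qr/Cannot locate database \Q($schema)\E connection file\./,
    'dies when connection file does not exist';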
We have a project where we run a couple of tests with a duration of around 3 hours, and I have observed that a lot of minutes are lost in the step of generating the results. For example:
I have this script, harness.pl:
use strict;
use TAP::Harness;
use TAP::Formatter::HTML;
use TAP::Formatter::Console;
# Define Tests
my $tests = {
    "test_1" => {
        test => 'script.t',
        args => ['--server', "server_1"]
    },
    # "test_2" => {
    #     test => 'script.t',
    #     args => ['--server', "server_2"]
    # },
};
# Get arguments
my $test_args = {};
$test_args->{$_} = $tests->{$_}->{args} foreach (sort keys %$tests);
# Define formatter
# my $fmt = TAP::Formatter::HTML->new();
# $fmt->output_file( "report.html" );
my $fmt = TAP::Formatter::Console->new({verbosity=>1});
# Start testing
my $harness = TAP::Harness->new( {
    test_args => $test_args,
    formatter => $fmt,
    merge     => 1,
    jobs      => 1,
} );
my $result = $harness->runtests(get_tests($tests));
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
print "Passed: ". scalar $result->passed . "\n";
print "Failed: ". scalar $result->failed . "\n";
print "Errors: ". scalar $result->has_errors . "\n";
sub get_tests {
    my ($tests) = @_;
    my @tests_to_run;
    TEST: foreach my $test (sort keys %$tests) {
        # some filtering...
        push @tests_to_run, [ $tests->{$test}->{test}, $test ];
    }
    # some prints...
    return @tests_to_run;
}
And a couple of tests; let's take the following script as an example:
use strict;
use warnings;
use Test::More;
use Getopt::Long qw(GetOptions);
my ($server);
GetOptions(
    'server|s=s' => \$server,
) or die "[ERROR] Invalid options passed to $0\n";
my $number_of_tests = 100000;
plan tests => 2 + $number_of_tests;
ok (1, "start");
foreach my $test_nr (0..$number_of_tests-1){
    ok(1, "Test number: $test_nr in $server");
}
ok ($server eq 'server_1', "Finished");
done_testing;
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
Now I run the tests with the HTML report formatter:
# Define formatter
my $fmt = TAP::Formatter::HTML->new();
$fmt->output_file( "report.html" );
And I get the following results:
ok 100002 - Finished
script.t Run Time: 34 second(s).
harness.pl Run Time: 42 second(s).
Passed: 100002
Failed: 0
Errors: 0
So the harness finishes 8 seconds after the script itself.
After that I tried with the default console formatter:
# Define formatter
my $fmt = TAP::Formatter::Console->new({verbosity=>1});
And the result:
ok 100002 - Finished
script.t Run Time: 34 second(s).
ok
All tests successful.
Files=1, Tests=100002, 34 wallclock secs (16.20 usr + 3.61 sys = 19.81 CPU)
Result: PASS
harness.pl Run Time: 34 second(s).
Passed: 100002
Failed: 0
Errors: 0
This looks quite good, which makes me think that the issue comes from creating a really big HTML report; in this example it generates a 13,854 KB HTML file.
So I go beyond this simple example to my project and run a simplified version of my test, which generates a report like the following:
As you can see, it is not a crazy number of tests, just around 35, and the report file is only 35 KB, so I don't think there is anything that should cause serious computing issues.
And then I look at the logs with the same structure, where I print the last line of the script, and I get this crazy difference in time:
t/long-test.t Run Time: 117 second(s).
# Looks like you failed 2 tests of 34.
Passed: 32
Failed: 2
Errors: 2
Harness.pl Run Time: 212 second(s).
Then I think again: OK, something must be wrong with this HTML library. So I go all in, remove the HTML report, and use just the console formatter:
# at t/mylongtest.t line 237.
t/mylongtest.t Run Time: 128 second(s).
# Looks like you failed 2 tests of 34.
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/34 subtests
Test Summary Report
-------------------
mylongtest (Wstat: 512 Tests: 34 Failed: 2)
Failed tests: 33-34
Non-zero exit status: 2
Files=1, Tests=34, 215 wallclock secs ( 0.22 usr + 0.12 sys = 0.34 CPU)
Result: FAIL
Passed: 32
Failed: 2
Errors: 2
Harness.pl Run Time: 216 second(s).
And then again I get this crazy difference: 128 seconds at the end of the test versus 216 seconds at the end of the harness.
Then I tried removing the harness script completely and just using prove, and I get the same result.
So my question is: what is happening between done_testing / the end of a test and the reporting of the results by the test harness, and what can I do to improve this?
Update: I was finally able to reproduce the issue in a minimal example, and it seems that the problem is not TAP but what happens at the end of the script, with garbage collection or something similar. I created two scripts, the first one delete_me.pl:
use strict;
use JSON;
my $huge_json_string = '{';
foreach my $i (0..20000000){
    $huge_json_string .= '"key_'.$i.'": "some_value'.$i.'",';
}
$huge_json_string =~ s/,$/}/;
print "String generated\n";
my $all_to_ram = decode_json($huge_json_string);
print "Hash generated\n";
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
And now I call it from another script, delete_me_caller.pl:
use strict;
system("perl delete_me.pl");
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
And here is the surprise:
String generated
Hash generated
delete_me.pl Run Time: 20 second(s).
delete_me_caller.pl Run Time: 42 second(s).
So it takes more time to end the script than to generate the data and load it into memory.
Note: For this script I was using my personal computer with 32 GB of RAM and an i7-8650U, so this may take a lot longer with less RAM or processing power.
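One way to check whether that extra time is Perl freeing the huge hash during global destruction is to end the child script with POSIX::_exit, which skips END blocks and destructors. A minimal sketch of the modified tail of delete_me.pl (not from the original scripts):
use POSIX ();
# ... same code as delete_me.pl above, up to the final print ...
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
# POSIX::_exit() terminates the process immediately, without running global
# destruction, so the multi-gigabyte hash is never torn down by Perl itself.
POSIX::_exit(0);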
I have the following test script that utilizes Test::Mojo. When I run it from the command line using perl, it outputs correctly. However, when I run it through "prove -v", the Mojo logging is duplicated and one of them isn't piped through "on message".
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More tests => 1;
use Mojolicious::Lite;
use Test::Mojo;
app->log->on(
    message => sub {
        my ( $log, $level, @lines ) = @_;
        note "MojoLog $level: @lines";
    }
);
get '/debug/mojo/req_url' => sub {
    my $c = shift;
    $c->render( text => $c->req->url );
};
subtest 'Mojo - $c->req->url' => sub {
    plan tests => 3;
    my $t = Test::Mojo->new;
    $t->get_ok('/debug/mojo/req_url') #
        ->status_is(200) #
        ->content_is('/debug/mojo/req_url');
};
The output when run directly:
$ perl dup_logging.t
1..1
# Subtest: Mojo - $c->req->url
1..3
# MojoLog debug: GET "/debug/mojo/req_url"
# MojoLog debug: Routing to a callback
# MojoLog debug: 200 OK (0.000797s, 1254.705/s)
ok 1 - GET /debug/mojo/req_url
ok 2 - 200 OK
ok 3 - exact match for content
ok 1 - Mojo - $c->req->url
And the output when run through prove:
$ prove -v dup_logging.t
dup_logging.t ..
1..1
# Subtest: Mojo - $c->req->url
1..3
[Thu Mar 8 12:16:35 2018] [debug] GET "/debug/mojo/req_url"
# MojoLog debug: GET "/debug/mojo/req_url"
[Thu Mar 8 12:16:35 2018] [debug] Routing to a callback
# MojoLog debug: Routing to a callback
[Thu Mar 8 12:16:35 2018] [debug] 200 OK (0.000842s, 1187.648/s)
# MojoLog debug: 200 OK (0.000842s, 1187.648/s)
ok 1 - GET /debug/mojo/req_url
ok 2 - 200 OK
ok 3 - exact match for content
ok 1 - Mojo - $c->req->url
ok
All tests successful.
Files=1, Tests=1, 1 wallclock secs ( 0.03 usr 0.01 sys + 0.34 cusr 0.03 csys = 0.41 CPU)
Result: PASS
The following is my version information:
$ perl -MMojolicious -E 'say Mojolicious->VERSION'
7.14
$ prove --version
TAP::Harness v3.36 and Perl v5.16.3
I discovered that one way to avoid this problem is to set the MOJO_LOG_LEVEL environment variable at the top of the script.
$ENV{MOJO_LOG_LEVEL} = 'fatal';
Any other suggestions on how to get prove and Test::Mojo to play well together with regard to the logging?
The prove testrunner uses the TAP::Harness infrastructure. When you run prove -v, this will set the HARNESS_IS_VERBOSE environment variable.
Then, Test::Mojo picks up this environment variable:
# Silent or loud tests
$ENV{MOJO_LOG_LEVEL} ||= $ENV{HARNESS_IS_VERBOSE} ? 'debug' : 'fatal';
You therefore get Mojo's debug log messages when running prove -v.
It seems that manually setting the MOJO_LOG_LEVEL env variable is the best approach if you do not want this output.
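If you want to keep the note()-based output but silence Mojo's own log writer without touching the log level, another option is to drop Mojo::Log's default message subscriber before attaching your own. A sketch, assuming the stock log output is installed as a 'message' subscriber (as it is in Mojolicious of this vintage):
# Remove the built-in subscriber that prints the "[Thu Mar  8 ...] [debug] ..." lines,
# then attach only the handler that routes messages through note().
app->log->unsubscribe('message');
app->log->on(
    message => sub {
        my ( $log, $level, @lines ) = @_;
        note "MojoLog $level: @lines";
    }
);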
I've seen this in test examples:
eval "use Test::Module";
plan skip_all => "Test::Module required to test ..." if $#;
But when I try this with a module from which I need to explicitly import a function, I get an error.
eval "use Test::Module qw( function )";
plan skip_all => "Test::Module required to test ..." if $#;
Can't locate object method "function" via package "1" (perhaps you forgot to load "1"?)
How could I prevent this error?
If I load the module without eval it works fine.
Example:
use warnings;
use strict;
use Test::More;
eval "use Test::Warnings qw( :all )";
plan skip_all => "Test::Warnings required" if $#;
like( warning { warn "Hello" }, qr/Hello/, "Test for 'Hello' warning" );
done_testing();
Output:
PERL_DL_NONLAZY=1 /usr/local/bin/perl "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/warn.t .. Hello at t/warn.t line 8.
Can't locate object method "warning" via package "1" (perhaps you forgot to load "1"?) at t/warn.t line 8.
t/warn.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run
Test Summary Report
-------------------
t/warn.t (Wstat: 65280 Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=0, 0 wallclock secs ( 0.01 usr + 0.00 sys = 0.01 CPU)
Result: FAIL
Failed 1/1 test programs. 0/0 subtests failed.
make: *** [test_dynamic] Error 255
Try forcing the import to happen at compile-time:
BEGIN {
    eval "use Test::Module qw( function ); 1"
        or plan skip_all => "Test::Module required: $@";
}
Also, take a look at Test::Requires, which makes doing this sort of thing much easier.
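For reference, a minimal sketch of the Test::Warnings example above rewritten with Test::Requires (check its documentation for version constraints and import handling):
use strict;
use warnings;
use Test::More;
use Test::Requires 'Test::Warnings';   # skips the whole file if the module is missing
use Test::Warnings qw( :all );         # safe now: the module is known to be installed

like( warning { warn "Hello" }, qr/Hello/, "Test for 'Hello' warning" );
done_testing();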
You can also do the equivalent with a block eval, requiring and importing explicitly:
eval {
    require My::Module;
    My::Module->import('function');
};
Your code has nothing to do with the error:
eval "use Test::Module qw( function )";
plan skip_all => "Test::Module required to test ..." if $#;
The above can't throw an error since the eval is trapping it, and it's printing out completely different text.
The following WOULD throw the error:
my $aa = 1;
$aa->function('foobar');
Therefore, either follow the line number that the error reported, or look for places where you're calling function on a variable you're mistakenly treating as an object.
I have a working Coro program which I'm writing test scripts for. This is a cut-down version of how I'm trying to test it:
use Test::More;    # tests => 9;
BEGIN {
    use_ok( 'EV' )             || print "Bail out!\n";
    use_ok( 'Coro' )           || print "Bail out!\n";
    use_ok( 'AnyEvent' )       || print "Bail out!\n";
    use_ok( 'Coro::AnyEvent' ) || print "Bail out!\n";
}
my @coro;
push @coro, async sub { ok( Coro::AnyEvent::sleep(2), 'sleep' ) };
push @coro, async sub { ok( 1 == 1, 'one equals one' ) };
push @coro, async sub { isnt( 1, 2, 'one does not equal two' ) };
# push @coro, async sub { is( EV::unloop, undef, 'unloop' ) };
# EV::run;
map { ok( $_->join, 'join' ) } @coro;
which gives me
t/coro-test.t ..
ok 1 - use EV;
ok 2 - use Coro;
ok 3 - use AnyEvent;
ok 4 - use Coro::AnyEvent;
ok 5 - one equals one
ok 6 - one does not equal two
Undefined subroutine &main:: called at /usr/lib/perl5/Coro/AnyEvent.pm line 218.
Dubious, test returned 9 (wstat 2304, 0x900)
All 6 subtests passed
Test Summary Report
-------------------
t/coro-test.t (Wstat: 2304 Tests: 6 Failed: 0)
Non-zero exit status: 9
Parse errors: No plan found in TAP output
Files=1, Tests=6, 0 wallclock secs ( 0.02 usr 0.01 sys + 0.18 cusr 0.03 csys = 0.24 CPU)
Result: FAIL
My (real) program sets coroutines off and then they sleep whilst they have nothing to do, so it's not a contrived example.
Any help gratefully received. (I think the unloop and EV::run aren't required.)
I can't imagine a reason why Test::More wouldn't work, and in fact, it works for me with current versions of Coro and AnyEvent (and either EV or Perl as event backends).
I think your problem might be that Coro::AnyEvent::sleep returns something that Test::More doesn't like on your system. Assuming Coro::AnyEvent::sleep returns anything specific is looking for trouble anyway - the return value(s), if any, are undocumented, so expecting it to be something specific makes your program rely on undocumented behaviour, and failure is an expected outcome.
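If the point of that first coroutine is simply to prove the sleep completes, a sketch of one way to avoid relying on the undocumented return value (using Test::More's pass; not from the original answer):
# Don't assert on what Coro::AnyEvent::sleep returns; assert that the
# coroutine reached the statement after it without dying.
push @coro, async sub {
    Coro::AnyEvent::sleep(2);
    pass('slept 2 seconds and resumed');
};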
What could be the reason that this test (t/00-load.t) fails on *BSD with Perl versions from 5.12.0 to 5.12.2?
#!perl -T
use 5.010001;
use strict;
use warnings;
use Test::More tests => 1;
BEGIN {
    use_ok( 'Term::Choose' ) || print "Bail out!\n";
}
diag( "Testing Term::Choose $Term::Choose::VERSION, Perl $], $^X" );
Error:
t/00-load.t (Wstat: 139 Tests: 0 Failed: 0)
Non-zero wait status: 139
Parse errors: Bad plan. You planned 1 tests but ran 0.
The wait status 139 is 128 + 11: signal 11 is SIGSEGV, so the test process died from a segmentation fault.
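For reference, a small sketch for decoding such a wait status in Perl (the numbers just mirror the report above):
# The low 7 bits of a wait status hold the terminating signal;
# bit 7 (value 128) is the "core dumped" flag.
my $wstat  = 139;
my $signal = $wstat & 127;                 # 11 => SIGSEGV
my $core   = $wstat & 128 ? 'yes' : 'no';  # core dumped?
print "killed by signal $signal, core dumped: $core\n";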