I have a working Coro program which I'm writing test scripts for.
This is a cut-down version of how I'm trying to test it:
use Test::More; # tests => 9;
BEGIN{
use_ok( 'EV' ) || print "Bail out!\n";
use_ok( 'Coro' ) || print "Bail out!\n";
use_ok( 'AnyEvent' ) || print "Bail out!\n";
use_ok( 'Coro::AnyEvent' ) || print "Bail out!\n";
}
my @coro;
push @coro, async sub{ok(Coro::AnyEvent::sleep(2), 'sleep')};
push @coro, async sub{ok (1 == 1, 'one equals one')};
push @coro, async sub{isnt (1, 2, 'one does not equal two')};
#push @coro, async sub{is(EV::unloop, undef, 'unloop')};
#EV::run;
map {ok($_->join, 'join')} @coro;
which gives me
t/coro-test.t ..
ok 1 - use EV;
ok 2 - use Coro;
ok 3 - use AnyEvent;
ok 4 - use Coro::AnyEvent;
ok 5 - one equals one
ok 6 - one does not equal two
Undefined subroutine &main:: called at /usr/lib/perl5/Coro/AnyEvent.pm line 218.
Dubious, test returned 9 (wstat 2304, 0x900)
All 6 subtests passed
Test Summary Report
-------------------
t/coro-test.t (Wstat: 2304 Tests: 6 Failed: 0)
Non-zero exit status: 9
Parse errors: No plan found in TAP output
Files=1, Tests=6, 0 wallclock secs ( 0.02 usr 0.01 sys + 0.18 cusr 0.03 csys = 0.24 CPU)
Result: FAIL
My (real) program sets coroutines off and then they sleep while they have nothing to do, so it's not a contrived example.
Any help gratefully received. (I think the unloop and EV::run aren't required.)
I can't imagine a reason why Test::More wouldn't work, and in fact, it works for me with current versions of Coro and AnyEvent (and either EV or Perl as event backends).
I think your problem might be that Coro::AnyEvent::sleep returns something that Test::More doesn't like on your system. Assuming Coro::AnyEvent::sleep returns anything specific is asking for trouble anyway: the return value, if any, is undocumented, so expecting a particular value makes your program rely on undocumented behaviour, and failure is an expected outcome.
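If you still want a test tied to that coroutine, one option is to assert something you can actually observe instead of the return value. Here is a minimal sketch (not from the original code; the 1.9-second slack is an arbitrary allowance for timer granularity) that drops into the test script above:

use Time::HiRes qw(time);

push @coro, async sub {
    my $start = time;
    Coro::AnyEvent::sleep 2;
    # check an observable effect of the sleep, not its return value
    cmp_ok( time - $start, '>=', 1.9, 'coroutine slept for roughly 2 seconds' );
};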
We have a project where we run a couple of tests with a duration of around 3 hours, and I have observed that a lot of minutes are lost in the step of generating the results. For example, I have this script, harness.pl:
use strict;
use TAP::Harness;
use TAP::Formatter::HTML;
use TAP::Formatter::Console;
# Define Tests
my $tests = {
"test_1" => {
test => 'script.t',
args => ['--server', "server_1"]
},
# "test_2" => {
# test => 'script.t',
# args => ['--server', "server_2"]
# },
};
# Get arguments
my $test_args = {};
$test_args->{$_} = $tests->{$_}->{args} foreach (sort keys %$tests);
# Define formatter
# my $fmt = TAP::Formatter::HTML->new();
# $fmt->output_file( "report.html" );
my $fmt = TAP::Formatter::Console->new({verbosity=>1});
# Start testing
my $harness = TAP::Harness->new( {
test_args => $test_args,
formatter => $fmt,
merge => 1,
jobs => 1,
} );
my $result = $harness->runtests(get_tests($tests));
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
print "Passed: ". scalar $result->passed . "\n";
print "Failed: ". scalar $result->failed . "\n";
print "Errors: ". scalar $result->has_errors . "\n";
sub get_tests{
my ($tests) = @_;
my @tests_to_run;
TEST: foreach my $test (sort keys %$tests){
# some filtering...
push @tests_to_run, [$tests->{$test}->{test}, $test];
}
# some prints...
return @tests_to_run;
}
And a couple of tests; let's take the following script as an example:
use strict;
use warnings;
use Test::More;
use Getopt::Long qw(GetOptions);
my ($server);
GetOptions(
'server|s=s' => \$server,
) or die "[ERROR] Invalid options passed to $0\n";
my $number_of_tests = 100000;
plan tests => 2 + $number_of_tests;
ok (1, "start");
foreach my $test_nr (0..$number_of_tests-1){
ok(1,"Test number: $test_nr in $server");
}
ok ($server eq 'server_1', "Finished");
done_testing;
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
Now I run the tests with the HTML report enabled:
# Define formatter
my $fmt = TAP::Formatter::HTML->new();
$fmt->output_file( "report.html" );
And I get the following results:
ok 100002 - Finished
script.t Run Time: 34 second(s).
harness.pl Run Time: 42 second(s).
Passed: 100002
Failed: 0
Errors: 0
So the harness finishes 8 seconds after the script does.
After that I tried with the default console formatter:
# Define formatter
my $fmt = TAP::Formatter::Console->new({verbosity=>1});
And the result:
ok 100002 - Finished
script.t Run Time: 34 second(s).
ok
All tests successful.
Files=1, Tests=100002, 34 wallclock secs (16.20 usr + 3.61 sys = 19.81 CPU)
Result: PASS
harness.pl Run Time: 34 second(s).
Passed: 100002
Failed: 0
Errors: 0
So this looks quite good, which makes me think that the issue comes from creating a really big HTML report; in this example it generates a 13,854 KB HTML file.
So I go beyond this simple example to my project and run a simplified version of my test, which generates a report like the following:
As you can see it is not a crazy number of tests, just around 35, with a report file of about 35 KB, so I don't think there is anything here that should be computationally expensive.
And then I look at the logs, structured the same way (I print the last line of the script), and I get this crazy difference in time:
t/long-test.t Run Time: 117 second(s).
# Looks like you failed 2 tests of 34.
Passed: 32
Failed: 2
Errors: 2
Harness.pl Run Time: 212 second(s).
Then I think again: OK, something must be wrong with this HTML library. So I go all in, remove the HTML report, and use just the console formatter:
# at t/mylongtest.t line 237.
t/mylongtest.t Run Time: 128 second(s).
# Looks like you failed 2 tests of 34.
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/34 subtests
Test Summary Report
-------------------
mylongtest (Wstat: 512 Tests: 34 Failed: 2)
Failed tests: 33-34
Non-zero exit status: 2
Files=1, Tests=34, 215 wallclock secs ( 0.22 usr + 0.12 sys = 0.34 CPU)
Result: FAIL
Passed: 32
Failed: 2
Errors: 2
Harness.pl Run Time: 216 second(s).
And then again I get this crazy difference: 128 seconds at the end of the test versus 216 seconds at the end of the harness.
Then I tried removing the harness script completely and just using prove, and I get the same result.
So my question is: what is happening between done_testing / the end of a test and the test harness reporting the results, and what can I do to improve this?
Update: I finally was able to reproduce the issue in a minimal example, and it seems that the problem is not TAP but what happens at the end of the script, with the garbage collector or something similar. I created two scripts; the first one is delete_me.pl:
use strict;
use JSON;
my $huge_json_string = '{';
foreach my $i (0..20000000){
$huge_json_string .= '"key_'.$i.'": "some_value'.$i.'",';
}
$huge_json_string =~ s/,$/}/;
print "String generated\n";
my $all_to_ram = decode_json($huge_json_string);
print "Hash generated\n";
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
And now I call it from another script, delete_me_caller.pl:
use strict;
system("perl delete_me.pl");
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
And here is the surprise:
String generated
Hash generated
delete_me.pl Run Time: 20 second(s).
delete_me_caller.pl Run Time: 42 second(s).
So it takes more time to end the script than to generate the data and load it into memory.
Note: for this script I was using my personal computer with 32 GB of RAM and an i7-8650U, so this may take a lot longer with less RAM or a slower CPU.
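One workaround that is commonly used for exactly this pattern, offered here only as a sketch and not something from the original post: once all output has been produced, end the process with POSIX::_exit, which skips Perl's global destruction entirely, so the time spent freeing the huge structure disappears. A self-contained illustration (the hash below is just a stand-in for the expensive structure from delete_me.pl):

use strict;
use warnings;
use POSIX ();

$| = 1;  # autoflush, since _exit will not flush buffered output

# Stand-in for the huge structure built in delete_me.pl above.
my %big;
$big{"key_$_"} = "some_value$_" for 0 .. 1_000_000;

print "Hash generated\n";
print "$0 Run Time: " . ( time - $^T ) . " second(s).\n";

# _exit ends the process immediately: no global destruction, no END blocks,
# no destructors, so %big is never walked and freed one entry at a time.
# Only safe once all output is flushed and there is no real cleanup to do.
POSIX::_exit(0);

In a .t file this would have to go after done_testing (or after the planned number of tests has run), because Test::More's own end-of-test bookkeeping lives in an END block and is skipped as well.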
I am trying to create unit tests for my script using the Test::More and Test::Exception libraries.
I have read these articles: How to test for exceptions in Perl and Test::Exception.
The first article describes exactly what I need: testing whether my subroutine throws an exception or dies.
But I cannot get it working. Consider some examples:
#!/usr/bin/env perl
package My::SuperModule;
use strict;
use warnings;
use Net::Ping;
use Utils::Variables::Validator;
sub new
{
die "Hello";
# Getting the class name, stored in $_[0]
my $class = shift;
# Getting the user name from the arguments, $_[1]
my $user_name = shift;
........
}
And my test file
use warnings; # this warns you of bad practices
use strict; # this prevents silly errors
use Test::More; # for the is() and isnt() functions
use Test::Exception;
do './My/SuperModule.pm';
#Testing module loading
print "=================Testing module loading=================\n";
use_ok ( 'My::SuperModule' );
use_ok ( 'My::SuperModule', 'new' );
#Testing module subroutines
dies_ok { My::SuperModule->new() } "Died in class constructor";
sub div {
my ( $a, $b ) = @_;
return $a / $b;
};
dies_ok { div( 1, 0 ) } 'divide by zero detected';
It stops executing the script in any case, but I just need to handle whether it died. I need to test this because I manually invoke die when data is invalid or similar, but it dies and doesn't continue executing the rest of the script, giving me this message:
Uncaught exception from user code:
Hello at ../libs/My/SuperModule.pm line 31.
My::SuperModule::new('My::SuperModule', '') called at SuperModule.t line 24
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 2 just after 8.
But if I use division by zero it works like I want:
ok 16 - divide by zero detected
So it fails but doesn't terminate execution of the script.
I am a newbie in Perl, so I cannot solve the problem by myself; maybe there is no problem at all, just no way to do what I want.
Please suggest what to do, or point out where my mistake is.
EDIT
I have just tried to divide by zero inside my module's new subroutine, and here is the message I got:
Illegal division by zero at ../libs/My/SuperModule.pm line 33 (#1)
(F) You tried to divide a number by 0. Either something was wrong in
your logic, or you need to put a conditional in to guard against
meaningless input.
I really cannot figure out what is going on. Please help with this.
Here is a minimal example that works:
package My::SuperModule;
use strict;
use warnings;
sub new {
die "Hello";
}
package main;
run() unless caller;
use Test::More;
use Test::Exception;
sub run {
dies_ok { My::SuperModule->new } "dies ok";
done_testing;
}
Output:
C:\...\t> prove -v my.pl
my.pl ..
ok 1 - dies ok
1..1
ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.09 usr + 0.00 sys = 0.09 CPU)
Result: PASS
Note that, for the purpose of providing a self-contained example, I combined the module code and the test script into a single file.
Also note that I did not clutter the test script with unnecessary print statements.
If you want, you can use diag to show some output:
diag "Checking if Test::Exception::dies_ok will catch die in new";
dies_ok { My::SuperModule->new } "dies ok";
done_testing;
Output:
C:\...\t> prove my.pl
my.pl .. # Checking if Test::Exception::dies_ok will catch die in new
my.pl .. ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.05 usr + 0.02 sys = 0.06 CPU)
Result: PASS
As opposed to plain print statements, diag output will actually be shown when tests are run from a harness such as prove.
It works for me. All I needed to do was:
Comment out the two modules that the test wasn't using (Net::Ping and Utils::Variables::Validator; I don't have them installed).
Add 1; as the last line of the module, so it returns a true value.
Remove the extra require and use_ok from the test.
Add done_testing; to the end of the test, so the test harness knows that it got to the end of the tests.
You probably don't care about the first item on that list, but if you fix the others, it should work for you; see the sketch below.
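For illustration, a sketch of the test file with those changes applied (the lib path comes from the error message in the question; adjust it to wherever My/SuperModule.pm actually lives):

use strict;
use warnings;
use lib '../libs';   # assumed location of My/SuperModule.pm
use Test::More;
use Test::Exception;

use_ok('My::SuperModule');

dies_ok { My::SuperModule->new() } 'died in class constructor';

done_testing;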
I've seen this in test examples:
eval "use Test::Module";
plan skip_all => "Test::Module required to test ..." if $@;
But when I try this with a module which requires to explicitly import a function I get an error.
eval "use Test::Module qw( function )";
plan skip_all => "Test::Module required to test ..." if $@;
Can't locate object method "function" via package "1" (perhaps you forgot to load "1"?)
How could I prevent this error?
If I load the module without eval it works fine.
Example:
use warnings;
use strict;
use Test::More;
eval "use Test::Warnings qw( :all )";
plan skip_all => "Test::Warnings required" if $@;
like( warning { warn "Hello" }, qr/Hello/, "Test for 'Hello' warning" );
done_testing();
Output:
PERL_DL_NONLAZY=1 /usr/local/bin/perl "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/warn.t .. Hello at t/warn.t line 8.
Can't locate object method "warning" via package "1" (perhaps you forgot to load "1"?) at t/warn.t line 8.
t/warn.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run
Test Summary Report
-------------------
t/warn.t (Wstat: 65280 Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=0, 0 wallclock secs ( 0.01 usr + 0.00 sys = 0.01 CPU)
Result: FAIL
Failed 1/1 test programs. 0/0 subtests failed.
make: *** [test_dynamic] Fehler 255
Try forcing the import to happen at compile-time:
BEGIN {
eval "use Test::Module qw( function ); 1"
or plan skip_all => "Test::Module required: $#";
};
Also, take a look at Test::Requires, which makes doing this sort of thing much easier.
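For example, a sketch of the Test::Requires route (Test::Module and function are the placeholder names from the question):

use Test::More;
use Test::Requires 'Test::Module';          # plans skip_all if Test::Module can't be loaded
BEGIN { Test::Module->import('function') }  # explicit compile-time import, if one is needed

# ... tests that use function() ...
done_testing();

The BEGIN block matters for functions with prototypes, such as the block-taking warning from Test::Warnings; a runtime import happens too late for the parser to treat the call correctly.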
Alternatively, use a block eval with require and an explicit import:
eval {
    require My::Module;
    My::Module->import('function');
    1;
} or plan skip_all => "My::Module required to test ...";
Your code has nothing to do with the error:
eval "use Test::Module qw( function )";
plan skip_all => "Test::Module required to test ..." if $#;
The above can't throw an error since the eval is trapping it, and it's printing out completely different text.
The following WOULD throw the error:
my $aa = 1;
$aa->function('foobar');
Therefore, either follow the line number that the error reports, or look for places where you're calling function on a variable you're mistakenly treating as an object.
What could be the reason that this test (t/00-load.t) fails on *BSD with Perl versions from 5.12.0 to 5.12.2?
#!perl -T
use 5.010001;
use strict;
use warnings;
use Test::More tests => 1;
BEGIN {
use_ok( 'Term::Choose' ) || print "Bail out!\n";
}
diag( "Testing Term::Choose $Term::Choose::VERSION, Perl $], $^X" );
Error:
t/00-load.t (Wstat: 139 Tests: 0 Failed: 0)
Non-zero wait status: 139
Parse errors: Bad plan. You planned 1 tests but ran 0.
A wait status of 139 decodes to signal 11 (139 & 127) with the core-dump bit (128) set; signal 11 is SIGSEGV, so the test process crashed with a segmentation fault.
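For illustration, a small sketch of how such a wait status decodes:

my $wstat  = 139;
my $signal = $wstat & 127;   # 11 => SIGSEGV
my $core   = $wstat & 128;   # non-zero => core was dumped
my $exit   = $wstat >> 8;    # exit code for a normal exit (0 here)
printf "signal=%d core=%s exit=%d\n", $signal, $core ? 'yes' : 'no', $exit;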
I have created the following testing class:
package Test::DB;
# $Id$
#
use strict;
use warnings;
our $VERSION = '0.01';
use Carp;
use English '-no_match_vars';
use Test::Most;
use base 'Test::Class';
sub startup : Tests(startup) {
eval {
require esp_libs_db;
esp_libs_db->import;
1;
} or Carp::croak($EVAL_ERROR);
return 1;
}
sub connect : Tests(2) {
can_ok 'esp_libs_db', 'espDbConnect';
my $schema = 'test_schema';
throws_ok { espDbConnect($schema) }
qr/Cannot locate database ($schema) connection file./,
'dies when connection file does not exist';
return 1;
}
1;
When I come to run the test I get the following output:
t/run.t ..
#
# Test::DB->connect
1..2
ok 1 - esp_libs_db->can('espDbConnect')
not ok 2 - dies when connection file does not exist
# Failed test 'dies when connection file does not exist'
# at t/tests/Test/DB.pm line 39.
# (in Test::DB->connect)
# expecting: Regexp ((?-xism:Cannot locate database (test_schema) connection file.))
# found: <FONT color=red size=4 face=arial>ERROR: PM_DB_0004: Cannot locate database (test_schema) connection file.</FONT> at t/tests/Test/DB.pm line 38
# Looks like you failed 1 test of 2.
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/2 subtests
Test Summary Report
-------------------
t/run.t (Wstat: 256 Tests: 2 Failed: 1)
Failed test: 2
Non-zero exit status: 1
Files=1, Tests=2, 3 wallclock secs ( 0.44 usr 0.07 sys + 2.44 cusr 0.43 csys = 3.38 CPU)
Result: FAIL
I cannot see why the second test is failing when it should quite clearly pass.
( and ) are special characters in regular expressions; try escaping them:
qr/Cannot locate database \($schema\) connection file./
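For illustration, a sketch of the expectation from the test with the metacharacters escaped (the trailing dot is escaped too, since an unescaped . matches any character):

throws_ok { espDbConnect($schema) }
    qr/Cannot locate database \($schema\) connection file\./,
    'dies when connection file does not exist';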