I have the following test script that uses Test::Mojo. When I run it from the command line using perl, it outputs correctly. However, when I run it through "prove -v", the Mojo logging is duplicated and one of the copies isn't piped through the "message" handler.
#!/usr/bin/env perl
use strict;
use warnings;
use Test::More tests => 1;
use Mojolicious::Lite;
use Test::Mojo;
app->log->on(
message => sub {
my ( $log, $level, @lines ) = @_;
note "MojoLog $level: @lines";
}
);
get '/debug/mojo/req_url' => sub {
my $c = shift;
$c->render( text => $c->req->url );
};
subtest 'Mojo - $c->req->url' => sub {
plan tests => 3;
my $t = Test::Mojo->new;
$t->get_ok('/debug/mojo/req_url') #
->status_is(200) #
->content_is('/debug/mojo/req_url');
};
The output when run directly:
$ perl dup_logging.t
1..1
# Subtest: Mojo - $c->req->url
1..3
# MojoLog debug: GET "/debug/mojo/req_url"
# MojoLog debug: Routing to a callback
# MojoLog debug: 200 OK (0.000797s, 1254.705/s)
ok 1 - GET /debug/mojo/req_url
ok 2 - 200 OK
ok 3 - exact match for content
ok 1 - Mojo - $c->req->url
And the output when run through prove:
$ prove -v dup_logging.t
dup_logging.t ..
1..1
# Subtest: Mojo - $c->req->url
1..3
[Thu Mar 8 12:16:35 2018] [debug] GET "/debug/mojo/req_url"
# MojoLog debug: GET "/debug/mojo/req_url"
[Thu Mar 8 12:16:35 2018] [debug] Routing to a callback
# MojoLog debug: Routing to a callback
[Thu Mar 8 12:16:35 2018] [debug] 200 OK (0.000842s, 1187.648/s)
# MojoLog debug: 200 OK (0.000842s, 1187.648/s)
ok 1 - GET /debug/mojo/req_url
ok 2 - 200 OK
ok 3 - exact match for content
ok 1 - Mojo - $c->req->url
ok
All tests successful.
Files=1, Tests=1, 1 wallclock secs ( 0.03 usr 0.01 sys + 0.34 cusr 0.03 csys = 0.41 CPU)
Result: PASS
The following is my version information:
$ perl -MMojolicious -E 'say Mojolicious->VERSION'
7.14
$ prove --version
TAP::Harness v3.36 and Perl v5.16.3
I discovered that one way to avoid this problem is to set the MOJO_LOG_LEVEL environment variable at the top of the script.
$ENV{MOJO_LOG_LEVEL} = 'fatal';
Any other suggestions on how to get prove and Test::Mojo to play well together with regard to the logging?
The prove test runner uses the TAP::Harness infrastructure. When you run prove -v, it sets the HARNESS_IS_VERBOSE environment variable.
Test::Mojo then picks up this environment variable:
# Silent or loud tests
$ENV{MOJO_LOG_LEVEL} ||= $ENV{HARNESS_IS_VERBOSE} ? 'debug' : 'fatal';
You therefore get Mojo's debug log messages when running prove -v.
It seems that manually setting the MOJO_LOG_LEVEL env variable is the best approach if you do not want this output.
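For example, here is a minimal sketch of the top of such a test script. It sets the variable in a BEGIN block so it is already in place before Mojolicious::Lite and Test::Mojo are loaded (the plain assignment at the top of the script, as discovered in the question, works as well):
#!/usr/bin/env perl
use strict;
use warnings;

# Silence Mojo's log even under prove -v, before the framework is loaded.
BEGIN { $ENV{MOJO_LOG_LEVEL} = 'fatal' }

use Test::More tests => 1;
use Mojolicious::Lite;
use Test::Mojo;

# ... rest of dup_logging.t unchanged ...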
Related
We have a project where we run a couple of tests with a duration of around 3 hours, and I have observed that many minutes are lost in the step of generating the results. For example:
I have this script harness.pl
use strict;
use TAP::Harness;
use TAP::Formatter::HTML;
use TAP::Formatter::Console;
# Define Tests
my $tests = {
"test_1" => {
test => 'script.t',
args => ['--server', "server_1"]
},
# "test_2" => {
# test => 'script.t',
# args => ['--server', "server_2"]
# },
};
# Get arguments
my $test_args = {};
$test_args->{$_} = $tests->{$_}->{args} foreach (sort keys %$tests);
# Define formatter
# my $fmt = TAP::Formatter::HTML->new();
# $fmt->output_file( "report.html" );
my $fmt = TAP::Formatter::Console->new({verbosity=>1});
# Start testing
my $harness = TAP::Harness->new( {
test_args => $test_args,
formatter => $fmt,
merge => 1,
jobs => 1,
} );
my $result = $harness->runtests(get_tests($tests));
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
print "Passed: ". scalar $result->passed . "\n";
print "Failed: ". scalar $result->failed . "\n";
print "Errors: ". scalar $result->has_errors . "\n";
sub get_tests{
my ($tests) = @_;
my @tests_to_run;
TEST: foreach my $test (sort keys %$tests){
# some filtering...
push @tests_to_run, [$tests->{$test}->{test}, $test];
}
# some prints...
return @tests_to_run;
}
And a couple of tests; let's take the following script as an example:
use strict;
use warnings;
use Test::More;
use Getopt::Long qw(GetOptions);
my ($server);
GetOptions(
'server|s=s' => \$server,
) or die "[ERROR] Invalid options passed to $0\n";
my $number_of_tests = 100000;
plan tests => 2 + $number_of_tests;
ok (1, "start");
foreach my $test_nr (0..$number_of_tests-1){
ok(1,"Test numbber: $test_nr in $server");
}
ok ($server eq 'server_1', "Finished");
done_testing;
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
Now I run the tests with the HTML report formatter:
# Define formatter
my $fmt = TAP::Formatter::HTML->new();
$fmt->output_file( "report.html" );
And I get the following results:
ok 100002 - Finished
script.t Run Time: 34 second(s).
harness.pl Run Time: 42 second(s).
Passed: 100002
Failed: 0
Errors: 0
So 8 seconds pass after the script itself has finished.
After that I tried with the default console formatter:
# Define formatter
my $fmt = TAP::Formatter::Console->new({verbosity=>1});
And the result:
ok 100002 - Finished
script.t Run Time: 34 second(s).
ok
All tests successful.
Files=1, Tests=100002, 34 wallclock secs (16.20 usr + 3.61 sys = 19.81 CPU)
Result: PASS
harness.pl Run Time: 34 second(s).
Passed: 100002
Failed: 0
Errors: 0
So this looks quite good, which makes me think that the issue comes from creating a really big HTML report; in this example it would generate a 13,854 KB HTML file.
So I go beyond this simple example to my project and run a simplified version of my test, which generates a report like the following:
As you can see, it is not a crazy amount of tests, just around 35, with a report file of 35 KB; nothing that I would expect to cause heavy computation.
And then I look at the logs with the same structure, where I print the last line of the script, and I get this crazy difference in time:
t/long-test.t Run Time: 117 second(s).
# Looks like you failed 2 tests of 34.
Passed: 32
Failed: 2
Errors: 2
Harness.pl Run Time: 212 second(s).
Then I think again: OK, something must be wrong with this HTML library. So I remove the HTML report entirely and use just the console formatter:
# at t/mylongtest.t line 237.
t/mylongtest.t Run Time: 128 second(s).
# Looks like you failed 2 tests of 34.
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/34 subtests
Test Summary Report
-------------------
mylongtest (Wstat: 512 Tests: 34 Failed: 2)
Failed tests: 33-34
Non-zero exit status: 2
Files=1, Tests=34, 215 wallclock secs ( 0.22 usr + 0.12 sys = 0.34 CPU)
Result: FAIL
Passed: 32
Failed: 2
Errors: 2
Harness.pl Run Time: 216 second(s).
And then again I get this crazy difference: 128 seconds at the end of the test versus 216 seconds at the end of the harness.
Then I tried removing the harness script completely and just using prove, and I got the same result.
So my question is: what is happening between done_testing / the end of a test and the reporting of the results by the test harness, and what can I do to improve this?
Update: I was finally able to reproduce the issue in a minimal example, and it seems that the problem is not TAP but what happens at the end of the script, with garbage collection or something similar. I created two scripts, the first one delete_me.pl:
use strict;
use JSON;
my $huge_json_string = '{';
foreach my $i (0..20000000){
$huge_json_string .= '"key_'.$i.'": "some_value'.$i.'",';
}
$huge_json_string =~ s/,$/}/;
print "String generated\n";
my $all_to_ram = decode_json($huge_json_string);
print "Hash generated\n";
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
And now I call it from other script delete_me_caller.pl:
use strict;
system("perl delete_me.pl");
print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");
And here is the surprise:
String generated
Hash generated
delete_me.pl Run Time: 20 second(s).
delete_me_caller.pl Run Time: 42 second(s).
So it takes more time to end the script than it does to generate the data and load it into memory.
Note: For this script I was using my personal computer with 32 GB of RAM and an i7-8650U, so this may take a lot longer with less RAM or a slower CPU.
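If the extra time really is the interpreter freeing the huge hash during global destruction, one possible workaround (a sketch, not something from the original scripts) is to skip that teardown once the script has printed its results. POSIX::_exit ends the process without running Perl's normal cleanup, so output must be flushed first; the tail of delete_me.pl could look like this:
use POSIX ();
use IO::Handle ();    # for the flush method on older perls

# ... build $huge_json_string and decode it as above ...

print( "$0 Run Time: " . ( time - $^T ) . " second(s).\n");

# Flush pending output, then exit without running global destruction,
# so the multi-gigabyte hash is never freed entry by entry.
STDOUT->flush;
POSIX::_exit(0);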
I am trying to create unit tests for my script using the Test::More and Test::Exception libraries.
I have read these articles How to test for exceptions in Perl and Test::Exception.
The first article describes exactly what I need: to test whether my subroutine throws an exception or dies.
But I cannot get it working. Consider these examples:
#!/usr/bin/env perl
package My::SuperModule;
use strict;
use warnings;
use Net::Ping;
use Utils::Variables::Validator;
sub new
{
die "Hello";
# Getting the class name, stored in $_[0]
my $class = shift;
# Getting the user name from the arguments, $_[1]
my $user_name = shift;
........
}
And my test file
use warnings; # this warns you of bad practices
use strict; # this prevents silly errors
use Test::More; # for the is() and isnt() functions
use Test::Exception;
do './My/SuperModule.pm';
#Testing module loading
print "=================Testing module loading=================\n";
use_ok ( 'My::SuperModule' );
use_ok ( 'My::SuperModule', 'new' );
#Testing module subroutines
dies_ok { My::SuperModule->new() } "Died in class constructor";
sub div {
my ( $a, $b ) = @_;
return $a / $b;
};
dies_ok { div( 1, 0 ) } 'divide by zero detected';
It stops executing the script in any case, but I just need to handle the die. I need to test this because I manually invoke die when data is invalid or for some other reason, but it dies and doesn't continue to execute the script further, giving me this message:
Uncaught exception from user code:
Hello at ../libs/My/SuperModule.pm line 31.
My::SuperModule::new('My::SuperModule', '') called at SuperModule.t line 24
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 2 just after 8.
But if I use division by zero, it works the way I want:
ok 16 - divide by zero detected
So the code dies, but it doesn't terminate execution of the test script.
I am a newbie in Perl, so I cannot solve the problem by myself; maybe there is no problem at all, just no way to do what I want.
Please suggest what to do, or point out where my mistake is.
EDIT
I have just tried to divide by zero inside my module's new subroutine, and here is the message I got:
Illegal division by zero at ../libs/My/SuperModule.pm line 33 (#1)
(F) You tried to divide a number by 0. Either something was wrong in
your logic, or you need to put a conditional in to guard against
meaningless input.
I really cannot figure out what is going on. Please help with this.
Here is a minimal example that works:
package My::SuperModule;
use strict;
use warnings;
sub new {
die "Hello";
}
package main;
run() unless caller;
use Test::More;
use Test::Exception;
sub run {
dies_ok { My::SuperModule->new } "dies ok";
done_testing;
}
Output:
C:\...\t> prove -v my.pl
my.pl ..
ok 1 - dies ok
1..1
ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.09 usr + 0.00 sys = 0.09 CPU)
Result: PASS
Note that, for the purpose of providing a self-contained example, I combined the module code and the test script into a single file.
Also note that I did not clutter the test script with unnecessary print statements.
If you want, you can use diag to show some output:
diag "Checking if Test::Exception::dies_ok will catch die in new";
dies_ok { My::SuperModule->new } "dies ok";
done_testing;
Output:
C:\...\t> prove my.pl
my.pl .. # Checking if Test::Exception::dies_ok will catch die in new
my.pl .. ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.05 usr + 0.02 sys = 0.06 CPU)
Result: PASS
As opposed to plain print statements, diag output will actually be shown when tests are run from a harness such as prove.
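A small sketch of the difference, assuming a test file named demo.t (the name is just for illustration): note writes to the test's STDOUT, which the harness hides unless you run prove -v, while diag writes to STDERR and is shown either way:
use Test::More tests => 1;

note "only visible when running the .t file directly or under prove -v";
diag "always visible, even under plain prove";

ok 1, 'dummy test';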
It works for me. All I needed to do was:
Comment out the two modules that the test wasn't using (Net::Ping and Utils::Variables::Validator - I don't have them installed).
Add 1; as the last line of the module - so it returns a true value.
Remove the extra require and use_ok from the test.
Add done_testing; to the end of the test - so the test harness knows that it got to the end of the tests.
You probably don't care about the first item on that list, but if you fix the others, it should work for you.
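Put together, a sketch of what that looks like with those fixes applied (the file layout, with the module under lib/ and a use lib 'lib' line, is an assumption for illustration):
# lib/My/SuperModule.pm
package My::SuperModule;
use strict;
use warnings;

sub new {
    die "Hello";    # the constructor refuses to build the object
}

1;    # a module must return a true value

# t/supermodule.t
use strict;
use warnings;
use lib 'lib';
use Test::More;
use Test::Exception;

use_ok('My::SuperModule');
dies_ok { My::SuperModule->new } 'died in class constructor';

done_testing;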
I have a working Coro program which I'm writing test scripts for.
This is a cut-down version of how I'm trying to test it:
use Test::More; # tests => 9;
BEGIN{
use_ok( 'EV' ) || print "Bail out!\n";
use_ok( 'Coro' ) || print "Bail out!\n";
use_ok( 'AnyEvent' ) || print "Bail out!\n";
use_ok( 'Coro::AnyEvent' ) || print "Bail out!\n";
}
my @coro;
push @coro, async sub{ok(Coro::AnyEvent::sleep(2), 'sleep')};
push @coro, async sub{ok (1 == 1, 'one equals one')};
push @coro, async sub{isnt (1, 2, 'one does not equal two')};
#push @coro, async sub{is(EV::unloop, undef, 'unloop')};
#EV::run;
map {ok($_->join, 'join')} @coro;
which gives me
t/coro-test.t ..
ok 1 - use EV;
ok 2 - use Coro;
ok 3 - use AnyEvent;
ok 4 - use Coro::AnyEvent;
ok 5 - one equals one
ok 6 - one does not equal two
Undefined subroutine &main:: called at /usr/lib/perl5/Coro/AnyEvent.pm line 218.
Dubious, test returned 9 (wstat 2304, 0x900)
All 6 subtests passed
Test Summary Report
-------------------
t/coro-test.t (Wstat: 2304 Tests: 6 Failed: 0)
Non-zero exit status: 9
Parse errors: No plan found in TAP output
Files=1, Tests=6, 0 wallclock secs ( 0.02 usr 0.01 sys + 0.18 cusr 0.03 csys = 0.24 CPU)
Result: FAIL
My (real) program sets coroutines off, then they sleep whilst they have nothing to do, so it's not a contrived example.
Any help gratefully received. (I think the unloop and EV::run aren't required.)
I can't imagine a reason why Test::More wouldn't work, and in fact, it works for me with current versions of Coro and AnyEvent (and either EV or Perl as event backends).
I think your problem might be that Coro::AnyEvent::sleep returns something that Test::More doesn't like on your system. Assuming Coro::AnyEvent::sleep returns anything specific is looking for trouble anyway - the return value(s), if any, are undocumented, so expecting it to be something specific makes your program rely on undocumented behaviour, and failure is an expected outcome.
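A sketch of how the sleeping coroutine could be written so the test no longer depends on that undocumented return value; the assumption here is that you only care that the sleep completes without dying:
# Don't assert on the return value of Coro::AnyEvent::sleep;
# just check that the coroutine gets past it.
push @coro, async sub {
    Coro::AnyEvent::sleep(2);
    ok(1, 'sleep completed');
};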
I have a simple test file under t/. I want to display the summary after running all the tests. For some reason, the summary is displayed before the results of "04.pl" are displayed. How do I display the summary after running all the tests?
==> t/test.t
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;
plan tests => 4;
require "/home/mydir/perl5/lib/perl5/TAP/Harness.pm";
require "/home/mydir/perl5/lib/perl5/TAP/Formatter/HTML.pm";
require "/home/mydir/perl5/lib/perl5/TAP/Parser/Aggregator.pm";
my $f = TAP::Formatter::HTML->new;
$f->verbosity(-1);
$f->force_inline_css(0);
my %args = (
formatter => $f,
merge => 1,
);
my $a = TAP::Parser::Aggregator->new;
my $h = TAP::Harness->new(\%args);
my @files = qw[01.pl 02.pl 03.pl 04.pl];
$a->start;
$h->aggregate_tests($a, @files);
$a->stop;
done_testing(4);
my $summary = <<'END_SUMMARY';
Total Tests: %s
Passed: %s
Failed: %s
Unexpectedly succeeded: %s
END_SUMMARY
printf $summary,
scalar $a->total,
scalar $a->passed,
scalar $a->failed,
scalar $a->todo_passed;
==> Output (in html format)
1..4
ok 1 - First
ok 1 - Second
ok 1 - Third
Total Tests: 4
Passed: 4
Failed: 0
Unexpectedly succeeded: 0
ok 1 - Four
To run the tests, I used the 'prove' utility:
/home/mydir/perl5/bin/prove -Q --formatter=TAP::Formatter::HTML > output.html
The following command will run all the .t files in the t/ subdirectory, merging STDOUT and STDERR (-m) and running quietly (-Q).
prove -m -Q --state=last --formatter=TAP::Formatter::HTML t/ > output.html
The simplest way to execute it from Perl is:
$output = `prove -m -Q --state=last --formatter=TAP::Formatter::HTML t/`;
Take a look at http://perldoc.perl.org/prove.html#--state to see all the possibilities of the --state parameter.
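If you run prove from Perl like that, it may also be worth checking its exit status so a failing test run does not go unnoticed. A minimal sketch, writing the captured report to output.html as in the shell command above:
my $output = `prove -m -Q --state=last --formatter=TAP::Formatter::HTML t/`;

# prove exits non-zero when any test fails; the high byte of $? is its exit code.
my $exit = $? >> 8;

open my $fh, '>', 'output.html' or die "Cannot write output.html: $!";
print {$fh} $output;
close $fh;

warn "prove reported failures (exit status $exit)\n" if $exit;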
I have created the following testing class:
package Test::DB;
# $Id$
#
use strict;
use warnings;
our $VERSION = '0.01';
use Carp;
use English '-no_match_vars';
use Test::Most;
use base 'Test::Class';
sub startup : Tests(startup) {
eval {
require esp_libs_db;
esp_libs_db->import;
1;
} or Carp::croak($EVAL_ERROR);
return 1;
}
sub connect : Tests(2) {
can_ok 'esp_libs_db', 'espDbConnect';
my $schema = 'test_schema';
throws_ok { espDbConnect($schema) }
qr/Cannot locate database ($schema) connection file./,
'dies when connection file does not exist';
return 1;
}
1;
When I come to run the test I get the following output:
t/run.t ..
#
# Test::DB->connect
1..2
ok 1 - esp_libs_db->can('espDbConnect')
not ok 2 - dies when connection file does not exist
# Failed test 'dies when connection file does not exist'
# at t/tests/Test/DB.pm line 39.
# (in Test::DB->connect)
# expecting: Regexp ((?-xism:Cannot locate database (test_schema) connection file.))
# found: <FONT color=red size=4 face=arial>ERROR: PM_DB_0004: Cannot locate database (test_schema) connection file.</FONT> at t/tests/Test/DB.pm line 38
# Looks like you failed 1 test of 2.
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/2 subtests
Test Summary Report
-------------------
t/run.t (Wstat: 256 Tests: 2 Failed: 1)
Failed test: 2
Non-zero exit status: 1
Files=1, Tests=2, 3 wallclock secs ( 0.44 usr 0.07 sys + 2.44 cusr 0.43 csys = 3.38 CPU)
Result: FAIL
I cannot see why the second test is failing when it should quite clearly pass.
( and ) are special characters in regular expressions; try escaping them:
qr/Cannot locate database \($schema\) connection file./
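Alternatively, since $schema is interpolated into the pattern, any metacharacters it might contain can be neutralised with \Q...\E while the literal parentheses in the message are escaped explicitly; a sketch of the same assertion:
throws_ok { espDbConnect($schema) }
    qr/Cannot locate database \(\Q$schema\E\) connection file\./,
    'dies when connection file does not exist';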