There is a great Perl module
Test::More that everybody uses for
unit testing. Here is the very simple script t/sample_1.t:
use Test::More tests => 1;
fail('This test fails');
I wanted to write a script that does the same thing, but without
Test::More.
I've read several docs about TAP (the Test Anything Protocol) to find out how to write such a script. I've read:
Wikipedia article about TAP
TAP specification
Unfortunately the documentation wasn't enough. I had to examine the output of a script that uses Test::More to find out that I need to output diagnostics to STDERR (there was nothing about this in the docs).
So, I have written a script that does exactly the same thing as the Test::More script. Here is the listing of t/sample_2.t:
$| = 1;
print "1..1\n";
print "not ok 1 - This test fails\n";
print STDERR "# Failed test 'This test fails'\n";
print STDERR "# at t/sample_1.t line 3.\n";
print STDERR "# Looks like you failed 1 test of 1.\n";
exit 1;
But when run under prove, these two scripts display differently: the line "# Failed test 'This test fails'" appears in a different position for each script.
I've written a test script that uses Capture::Tiny to check that the STDERR, STDOUT, and exit code of both scripts are identical, and it shows that both scripts output the same things.
I've stored all the test files and the test script in a GitHub repo.
My question: how should I write a Perl unit test without Test::More so that it produces the same output as with Test::More?
PS: If you are wondering why I need this: it is to solve an issue with my Perl module Test::Whitespaces.
While I've got absolutely no frickin idea what's going on, I can get the outputs to match (visually at least) by including the following before any other output to STDERR:
print STDERR "\r";
This makes them match visually when run through prove or plain old perl. However, this is NOT what Test::More is doing.
The TAP you're outputting is per spec; if prove wants to treat it differently from the TAP Test::More is outputting, I'd argue that's a bug (or at least an oddity) in prove. Personally when I've written Test modules, I've always used Test::Builder or wrapped Test::More to output the TAP. Each of these is a core module. This seems to be what the majority of Test modules tend to do.
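For reference, a minimal sketch of the Test::Builder route, equivalent to the failing Test::More script above:
use strict;
use warnings;
use Test::Builder;

my $tb = Test::Builder->new;
$tb->plan(tests => 1);
$tb->ok(0, 'This test fails');   # same effect as fail() in Test::More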
At last I have found out what is going on.
hobbs advised me to use Test::Builder. I created a test script with
Test::Builder that worked exactly like the script with Test::More (here
it is).
Then I started examining the source code of Test::Builder to find the
source of this behaviour. Here is the relevant part of the
lib/TB2/Formatter/TAP/Base.pm file:
# Emit old style comment failure diagnostics
sub _comment_diagnostics {
    my($self, $result) = @_;

    ...

    # Start on a new line if we're being output by Test::Harness.
    # Makes it easier to read
    $self->$out_method("\n") if ($out_method eq 'err') and $ENV{HARNESS_ACTIVE};
    $self->$diag_method($msg);

    return;
}
So, this is the answer: prove sets the special environment variable
HARNESS_ACTIVE, and Test::More and friends put an additional line break
"\n" before any diagnostics printed to STDERR.
At last I've created a test script that outputs exactly the same as the script
written with Test::More. Source code of the script.
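For reference, here is a sketch of that final script; it is t/sample_2.t plus the one conditional newline discovered above:
$| = 1;
print "1..1\n";
print "not ok 1 - This test fails\n";
# prove sets HARNESS_ACTIVE, and Test::More starts STDERR diagnostics
# on a fresh line when running under a harness
print STDERR "\n" if $ENV{HARNESS_ACTIVE};
print STDERR "# Failed test 'This test fails'\n";
print STDERR "# at t/sample_1.t line 3.\n";
print STDERR "# Looks like you failed 1 test of 1.\n";
exit 1;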
I really don't like this solution. It took me and other people a lot of time to
find out what is going on. I'm sure that the task of pretty output should be
solved in TAP parsers, and not in TAP producers.
=(
Related
It appears that simply putting a say, print, etc. into a .t doesn't work: the output is hidden. So when using Test::More and Test::Tester, how can I simply print something? I want this so I can play with some code while determining how to test it. Note: it's OK if it's sent to STDERR or only viewable using verbose mode. Also, I tried using diag, but that didn't appear to work just anywhere in the test.
If you run a test script directly, you will see the output of print -- tests are just Perl code. However, if you run your tests using a harness, what you see in the output will be determined by the harness, especially its verbosity level, and by whether you print to STDOUT or STDERR.
For another way to print messages within tests, see Diagnostics in the documentation for Test::More, notably:
diag(...);
note(...);
Experimenting with a script like this will quickly illustrate how things work:
# Example usages:
# perl some_test.t # We see everything in output.
# prove some_test.t # We see only diag() and STDERR.
# prove -v some_test.t # Everything again.
# In some_test.t
use strict;
use warnings;
use Test::More;
pass;
diag("diag()");
note("note()");
print "STDOUT\n";
print STDERR "STDERR\n";
done_testing;
I have a pretty big Perl script that is executed quite frequently (from cron).
Most executions require only pretty short & simple tests.
How can I split the single-file script into two parts, with "part two" compiled only when "part one" decides it is needed?
Considered solutions:
using a BEGIN { …; exit if …; } block for the trivial test.
a two-file solution, with file_1 using require to compile and execute file_2.
I would prefer a single-file solution to ease maintenance, if the cost is reasonable.
First, you should measure how long the compilation really takes, to see if this "optimization" is even necessary. If it does happen to be, then since you said you'd prefer a one-file solution, one possible solution is using the __DATA__ section for code like so:
use warnings;
use strict;

# measure compilation and execution time
use Time::HiRes qw/ gettimeofday tv_interval /;
my $start;
BEGIN { $start = [gettimeofday] }
INIT  { printf "%.06f\n", tv_interval($start) }
END   { printf "%.06f\n", tv_interval($start) }

my $condition = 1; # dummy for testing

# conditionally compile and run the code in the DATA section
if ($condition) {
    eval do { local $/; <DATA> . '; 1' } or die $@;
}

__DATA__
# ... lots of code here ...
I see two ways of achieving what you want. The simple one would be to divide the script into two parts: the first part does the simple tests; then, if you need to do more complicated tests, you "add" the second part. The way to do this is using eval, like this:
<first-script.pl>
...
eval `cat second-script.pl`;
if ($@) {
    print STDERR $@, "\n";
    die "Errors in the second script.\n";
}
Or using File::Slurp in a more robust way:
eval read_file("second-script.pl", binmode => ':utf8');
Or, following @amon's suggestion, do:
do "second-script.pl";
Only beware that do is different from eval in this way:
It also differs in that code evaluated with do FILE cannot see lexicals in the enclosing scope; eval STRING does. It's the same, however, in that it does reparse the file every time you call it, so you probably don't want to do this inside a loop.
The eval will execute in the context of the first script, so any variables or initializations will be available to that code.
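A tiny illustration of that scoping difference (file contents hypothetical):
# second-script.pl contains just:  print "config is: $config\n";
my $config = 'from the first script';

eval `cat second-script.pl`;   # the eval'd code sees the lexical:
                               # prints "config is: from the first script"
do './second-script.pl';       # cannot see the lexical: $config is undef there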
Related to this, there is this question: Best way to add dynamic code to a perl application, which I asked some time ago (and answered myself with the help of the comments provided and some research.) I took some time to document everything I could think of for anyone (and myself) to refer to.
The second way I see would be to turn your testing script into a daemon and have the crontab entry call this daemon as necessary. The daemon remains alive, so any data structures that you may need will remain in memory. On the downside, this will consume resources continuously, as the daemon process will always be running.
My program runs perl as a process in potentially many places with different scripts which I have inherited and would prefer not to modify needlessly if I can avoid it.
The root problem I'm facing is that my program cannot consume standard output while the perl script is executing unless autoflush is enabled (otherwise it just gets every log message at once after the perl script has finished).
Therefore, what I'd like to do is to run perl with autoflush enabled by command line argument if possible. Something like this would be ideal:
perl -e "$| = 1" -e "foo.pl"
But obviously that doesn't work.
There is a CPAN module called Devel::Autoflush that does exactly this. You would invoke it from the command line:
perl -MDevel::Autoflush your-script-name-here.pl
...and it sets autoflush mode. Looking at the source code it's pretty easy to see how it works. You could just implement it yourself if you live in a world where CPAN modules are not permitted. Just create a module as follows:
package AutoFlush;
my $orig_fh = select STDOUT;
$| = 1;
select STDERR;
$| = 1;
select $orig_fh;
1;
And then from the command line invoke it just as I described above:
perl -MAutoFlush your-script-name-here.pl
This little example module is almost identical to how Devel::Autoflush does it.
Update: And as TLP correctly points out, the following would be even simpler syntax:
package AutoFlush;
STDOUT->autoflush(1);
STDERR->autoflush(1);
1;
This may pull in more code, since the syntax relies on the implicit on-demand upgrading of the STDOUT and STDERR filehandles to IO::Handle objects, but when coding for clarity and programmer efficiency first, this is an obvious improvement.
Is there a Perl module which can test the CGI output of another program? E.g. I have a program
x.cgi
(this program is not in Perl) and I want to run it from program
test_x_cgi.pl
So, e.g. test_x_cgi.pl is something like
#!perl
use IPC::Run3;
run3 ["x.cgi"], ...;
So in test_x_cgi.pl I want to automatically check that the output of x.cgi doesn't do stupid things like printing messages before the HTTP header has been fully output. In other words, I want to have a kind of "browser" in Perl which processes the output. Before I try to create such a thing myself, is there any module on CPAN which does this?
Please note that x.cgi here is not a Perl script; I am trying to write a test framework for it in Perl. So, specifically, I want to test a string of output for ill-formedness.
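As a rough sketch of the kind of check I mean (the regex is only illustrative; a real check would need to be stricter):
use strict;
use warnings;
use IPC::Run3;
use Test::More tests => 1;

my ($out, $err);
# \undef gives the child a closed/empty STDIN
run3 ['./x.cgi'], \undef, \$out, \$err;

# The output must start with one or more "Name: value" header lines,
# and the header block must be terminated by a blank line.
like $out, qr/\A(?:[\w-]+:[^\r\n]*\r?\n)+\r?\n/, 'header block comes first';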
Edit: Thanks
I have already written a module which does what I want, so feel free to answer this question for the benefit of other people, but any further answers are academic as far as I'm concerned.
There's CGI::Test, which looks like what you're looking for. It specifically mentions the ability to test non-Perl CGI programs. It hasn't been updated for a while, but neither has the CGI spec.
There is Test::HTTP. I have not used it, but it seems to have an interface that fits your requirements.
$test->header_is($header_name, $value [, $description]);
Compares the response header $header_name with the value $value using Test::Builder->is_eq.
$test->header_like($header_name, $regex [, $description]);
Compares the response header $header_name with the regex $regex using Test::Builder->like.
Look at the examples in Chapter 16 of the Perl Cookbook:
16.9. Controlling the Input, Output, and Error of Another Program
It uses IPC::Open3.
From the Perl Cookbook, possibly modified by me; see below.
Example 16.2
cmd3sel - control all three of a kid's in, out, and error.
use IPC::Open3;
use IO::Select;

$cmd = "grep vt33 /none/such - /etc/termcap";

my $pid = open3(*CMD_IN, *CMD_OUT, *CMD_ERR, $cmd);

$SIG{CHLD} = sub {
    print "REAPER: status $? on $pid\n" if waitpid($pid, 0) > 0
};

#print CMD_IN "test test 1 2 3 \n";
close(CMD_IN);

my $selector = IO::Select->new();
$selector->add(*CMD_ERR, *CMD_OUT);

while (my @ready = $selector->can_read) {
    foreach my $fh (@ready) {
        if (fileno($fh) == fileno(CMD_ERR)) {print "STDERR: ", scalar <CMD_ERR>}
        else                                {print "STDOUT: ", scalar <CMD_OUT>}
        $selector->remove($fh) if eof($fh);
    }
}

close(CMD_OUT);
close(CMD_ERR);
If you want to check that the output of x.cgi is properly formatted HTML/XHTML/XML/etc, why not run it through the W3 Validator?
You can download the source and find some way to call it from your Perl test script. Or, you might be able to leverage this Perl interface for calling the W3 Validator on the web.
If you want to write a testing framework, I'd suggest taking a look at Test::More from CPAN as a good starting point. It's powerful but fairly easy to use and is definitely going to be better than cobbling something together as a one-off.
I've got a Perl script that needs to execute another Perl script. This second script can be executed directly on the command line, but I need to execute it from within my first program. I'll need to pass it a few parameters that would normally be passed in when it's run standalone (the first script runs periodically, and executes the second script under a certain set of system conditions).
Preliminary Google searches suggest using backticks or a system() call. Are there any other ways to run it? (I'm guessing yes, since it's Perl we're talking about :P ) Which method is preferred if I need to capture output from the invoked program (and, if possible, pipe that output as it executes to stdout as though the second program were invoked directly)?
(Edit: oh, now SO suggests some related questions. This one is close, but not exactly the same as what I'm asking. The second program will likely take an hour or more to run (lots of I/O), so I'm not sure a one-off invocation is the right fit for this.)
You can just do it.
{
    local @ARGV = qw<param1 param2 param3>;
    do '/home/buddy/myscript.pl';
}
Prevents the overhead of loading in another copy of perl.
The location of your current perl interpreter can be found in the special variable $^X. This is important if perl is not in your path, or if you have multiple perl versions available but wish to make sure you're using the same one across the board.
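For example (script name hypothetical):
# Invoke the second script with the same interpreter that is running this one
system($^X, 'second-script.pl', 'arg1', 'arg2') == 0
    or die "second-script.pl failed";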
When executing external commands, including other Perl programs, determining if they actually ran can be quite difficult. Inspecting $? can leave lasting mental scars, so I prefer to use IPC::System::Simple (available from the CPAN):
use strict;
use warnings;
use IPC::System::Simple qw(system capture);

# Run a command, wait until it finishes, and make sure it works.
# Output from this program goes directly to STDOUT, and it can take input
# from your STDIN if required.
system($^X, "yourscript.pl", @ARGS);

# Run a command, wait until it finishes, and make sure it works.
# The output of this command is captured into $results.
my $results = capture($^X, "yourscript.pl", @ARGS);
In both of the above examples any arguments you wish to pass to your external program go into @ARGS. The shell is also avoided in both of the above examples, which gives you a small speed advantage, and avoids any unwanted interactions involving shell meta-characters. The above code also expects your second program to return a zero exit value to indicate success; if that's not the case, you can specify an additional first argument of allowable exit values:
# Both of these commands allow an exit value of 0, 1 or 2 to be considered
# a successful execution of the command.
system( [0,1,2], $^X, "yourscript.pl", @ARGS );
# OR
capture( [0,1,2], $^X, "yourscript.pl", @ARGS );
If you have a long-running process and you want to process its data while it's being generated, then you're probably going to need a piped open, or one of the more heavyweight IPC modules from the CPAN.
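A piped open along these lines lets you read the child's output as it is produced (a sketch, assuming the child flushes its output):
use strict;
use warnings;

# Three-argument pipe open: no shell involved, arguments passed directly
open my $child, '-|', $^X, 'yourscript.pl', @ARGV
    or die "Cannot start yourscript.pl: $!";
while (my $line = <$child>) {
    print $line;   # process each line as it arrives
}
close $child or die "yourscript.pl exited with status $?";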
Having said all that, any time you need to be calling another Perl program from Perl, you may wish to consider if using a module would be a better choice. Starting another program carries quite a few overheads, both in terms of start-up costs, and I/O costs for moving data between processes. It also significantly increases the difficulty of error handling. If you can turn your external program into a module, you may find it simplifies your overall design.
All the best,
Paul
I can think of a few ways to do this. You already mentioned the first two, so I won't go into detail on them.
backticks: $retVal = `perl somePerlScript.pl`;
system() call
eval
The eval can be accomplished by slurping the other file into a string (or a list of strings), then eval'ing the strings. Here's a sample:
#!/usr/bin/perl
open PERLFILE, "<", "somePerlScript.pl" or die "Cannot open somePerlScript.pl: $!";
undef $/;   # this allows me to slurp the file, ignoring newlines
my $program = <PERLFILE>;
close PERLFILE;
eval $program;
die $@ if $@;   # report any errors from the eval'd code
4. do: do 'somePerlScript.pl'
You already got good answers to your question, but there's always the possibility of taking a different point of view: maybe you should consider refactoring the script that you want to run from the first script. Turn the functionality into a module, and use the module from both the first and the second script.
If you need to call your external script asynchronously (you just want to launch it and not wait for it to finish), then:
# On Unix systems, either of these will execute and just carry-on
# You can't collect output that way
`myscript.pl &`;
system ('myscript.pl &');
# On Windows systems the equivalent would be
`start myscript.pl`;
system ('start myscript.pl');
# If you just want to execute another script and terminate the current one
exec ('myscript.pl');
Use backticks if you need to capture the output of the command.
Use system if you do not need to capture the output of the command.
TMTOWTDI: so there are other ways too, but those are the two easiest and most likely.
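For instance (script name hypothetical):
my $listing = `perl second-script.pl`;   # backticks: output captured into $listing
system('perl', 'second-script.pl');      # system: output goes straight to your STDOUT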
See the perlipc documentation for several options for interprocess communication.
If your first script merely sets up the environment for the second script, you may be looking for exec.
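A sketch of that pattern (environment variable and script name hypothetical):
# Set up the environment, then replace the current process with the second script
$ENV{MY_APP_MODE} = 'batch';
exec $^X, 'second-script.pl', @ARGV or die "exec failed: $!";
On success, exec never returns; the second script simply takes over the process.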
#!/usr/bin/perl
use strict;
open(OUTPUT, "date|") or die "Failed to create process: $!\n";
while (<OUTPUT>)
{
print;
}
close(OUTPUT);
print "Process exited with value " . ($? >> 8) . "\n";
This will start the process date and pipe the output of the command to the OUTPUT filehandle which you can process a line at a time. When the command is finished you can close the output filehandle and retrieve the return value of the process. Replace date with whatever you want.
I wanted to do something like this to offload non-subroutine code into an external file to make editing easier. I actually made this into a subroutine. The advantage of this approach is that the "my" variables in the external file get declared in the main namespace; if you use 'do', they apparently don't migrate to the main namespace. Note that the listing below doesn't include error handling:
sub getcode($) {
    my @list;
    my $filename = shift;
    open (INFILE, "< $filename");
    @list = <INFILE>;
    close (INFILE);
    return \@list;
}
# and to use it:
my $codelist = getcode('sourcefile.pl');
eval join("", @$codelist);