Simultaneous Perl system calls on Windows - perl

I'm trying to find a way to have a Perl script run 4 other Perl scripts on Windows and then, once all are done, kick off a 5th script. I have checked out a bunch of things but none seem straightforward. Suggestions welcome. The scripts are going to run on a Windows box; scripts 1-4 need to finish before script 5 starts.

1. Accept answers to your other questions
2. use threads
2.1. Kick off the 4 scripts:

my @scripts = qw(... commands ...);
my @jobs = ();
foreach my $script (@scripts) {
    my $job = threads->create( sub {
        system($script);
    });
    push @jobs, $job;
}

2.2. Wait for completion:

$_->join() foreach @jobs;

2.3. Kick off the last script.
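A minimal sketch of that last step, assuming the fifth command line is held in a variable named $last_script (a placeholder name):

# all four jobs have been joined above, so this runs strictly after scripts 1-4
system($last_script);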
Edit
As you indicated that my solution didn't work for you, I fired up my Windoze box, taught myself to use this horrible cmd.exe and wrote the following test script. It is a bit simplified compared to the above solution, but it does meet your requirements about sequentiality etc.
#!/usr/bin/perl
use strict; use warnings; use threads;

my @scripts = (
    q(echo "script 1 reporting"),
    q(perl -e "sleep 2; print qq{hi there! This is script 2 reporting\n}"),
    q(echo "script 3 reporting"),
);

my @jobs = map {
    threads->create(sub {
        system($_);
    });
} @scripts;

$_->join foreach @jobs;

print "finished all my jobs\n";
system q(echo "This is the last job");
I used this command to execute the script (on Win7 with Strawberry Perl v5.12.2):
C:\...>perl stackoverflow.pl
And this is the output:
"script 1 reporting"
"script 3 reporting"
hi there! This is script 2 reporting
finished all my jobs
"This is the last job"
So how on earth does this not work? I would very much like to learn how to circumvent Perl's pitfalls the next time I write a script on a non-GNU system, so please enlighten me about what can go wrong.

From personal experience, using fork() in ActiveState Perl doesn't always run the processes in parallel. The threaded emulation of fork() used there seems to start all the processes, get them to a certain point, then run them one at a time. This applies even on multicore CPUs. I think Strawberry Perl is compiled the same way. Also, keep in mind that fork() is still being used for backticks and system(); it's just abstracted away.
If you use Cygwin Perl on Windows, it will run through Cygwin's own fork() call and things will parallelize properly. However, Cygwin is slower in other ways.

use Proc::Background;

my @commands = (
    ['./Files1.exe ALL'],
    ['./Files2.exe ALL'],
    ['./Files3.exe ALL'],
    ['./Files4.exe ALL'],
);

my @procs = map { Proc::Background->new(@$_) } @commands;

$_->wait for @procs;
system 'echo', 'CSCProc', '--pidsAndExitStatus', map { $_->pid, $_->wait } @procs;
`mergefiles.exe`;

Related

Shell Programming inside Perl

I am writing code in Perl with an embedded shell script in it:
#!/usr/bin/perl
use strict;
use warnings;

our sub main {
    my $n;
    my $n2 = 0;
    $n = chdir("/home/directory/");
    if ($n) {
        print "change directory successful $n \n";
        $n2 = system("cd", "section");
        system("ls");
        print "$n2 \n";
    }
    else {
        print "no success $n \n";
    }
    print "$n\n";
}
main();
But it doesn't work: when I do the ls, it doesn't show the new files. Does anyone know another way of doing it? I know I can use chdir(), but that is not the only problem, as I have other commands which I have created that are simply shell commands put together. So does anyone know exactly how to use the CLI in Perl, so that the shell commands stay attached to the same process rather than a new process being made for each system call? I really don't know what to do.
The edits have been used to improve the question. Please don't mind the edits if the question is clear.
Edits: good point made by mob that each system call is a single process, so it dies every time. But what I am trying to do is create a Perl script which follows an algorithm that decides the flow of control of the shell script. So how do I make all these shell commands follow the same process?
system spawns a new process, and any changes made to the environment of the new process are lost when the process exits. So calling system("cd foo") will change the directory to foo inside of a very short-lived process, but won't have any effect on the current process or any future subprocesses.
To do what you want to do (*), combine your commands into a single system call.
$n2 = system("cd section; ls");
You can use Perl's rich quoting features to pass longer sequences of commands, if necessary.
$n2 = system q{cd section
    if ls foo ; then
        echo we found foo in section
        ./process foo
    else
        echo we did not find foo\!
        ./create_foo > foo
    fi};
$n2 = system << "EOSH";
cd section
./process bar > /tmp/log
cd ../sekshun
./process foo >> /tmp/log
if grep -q Error /tmp/log ; then
echo there were errors ...
fi
EOSH
(*) of course there are easy ways to do this entirely in Perl, but let's assume that the OP eventually will need some function only available in an external program
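For completeness, here is a rough sketch of roughly the same flow done entirely in Perl with no shell logic; the file and program names are just the placeholders from the example above:

use strict;
use warnings;

chdir 'section' or die "cannot chdir to section: $!";
if (-e 'foo') {
    print "we found foo in section\n";
    system('./process', 'foo');
}
else {
    print "we did not find foo!\n";
    system('./create_foo > foo');   # the shell is still used here, for the redirection
}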
system("cd", "section"); attempts to execute the program cd, but there is no such program on your machine.
There is no such program because each process has its own current working directory, and one process cannot change another process's current working directory. (Programs would malfunction left and right if it were possible.)
It looks like you are attempting to have a Perl program execute a shell script. That requires recreating the shell in Perl. (More specific goals might have simpler solutions.)
What I am trying to do is create a perl script which follows an algorithm which decides the flow of control of the shell script.
Minimal change:
Create a shell script that prompts for instructions/commands. Have your Perl script launch the shell script using Expect and feed it answers/commands.
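A hedged sketch of that Expect approach; the wrapper script name (driver.sh) and its prompt text are invented for illustration, and note that Expect relies on pseudo-ttys, so in practice it works under Cygwin or another Unix-like layer rather than native Windows:

use strict;
use warnings;
use Expect;

my $exp = Expect->spawn('./driver.sh') or die "cannot spawn driver.sh: $!";
$exp->expect(10, 'next command?');   # wait up to 10 seconds for the prompt
$exp->send("ls section\n");          # feed the shell script an instruction
$exp->expect(10, 'next command?');
$exp->send("quit\n");
$exp->soft_close();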

How can I automatically run a large number of Perl scripts?

I need to run over 100 Perl scripts (written by a former employee) on Windows for our system stability testing. Each script has several functions, and each function sends certain Linux commands to our back-end system and gets results back. The results are written into a log file (currently each script has one log file). The results are "Success" or "Fail".
Running these Perl scripts one by one is killing my time. I am thinking about writing a batch file to automate it, but I would still have to parse the result files to generate a test report. I searched online, and it seems several testing frameworks, such as Test::Harness, Test::More and Test::Most, are good choices. But based on my understanding, they only take .t files, and our scripts are normal Perl scripts (.pl), not standard Perl test scripts (.t scripts). If using, say, Test::Harness, should I change all the Perl scripts from .pl to .t and put them under a t folder? How do I call my functions in Test::Harness? Can someone suggest a better way to automate the testing process and generate a test report like Test::Harness does? I think an example would be very helpful.
Test::Harness and friends aren't really an appropriate choice for this task, unless you want to modify all 100 of your scripts to emit TAP data instead of a log file.
Why not just write a Perl script to run all your Perl scripts?
use strict;
use warnings;

my $script_dir = "/path/to/dir/full/of/scripts";

opendir my $dh, $script_dir or die "Can't open dir $script_dir: $!";
my @scripts = grep { /\.pl$/ } readdir $dh;

foreach my $script (@scripts) {
    print "Running $script\n";
    system 'perl', "$script_dir/$script";
}
You could even parallelize this using fork and exec (or Parallel::ForkManager, even better), assuming that makes sense for your system.
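A hedged sketch of the Parallel::ForkManager variant, reusing the @scripts and $script_dir from above and picking an arbitrary limit of four concurrent children:

use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(4);    # at most 4 scripts running at once
foreach my $script (@scripts) {
    $pm->start and next;                   # parent: move on to the next script
    system 'perl', "$script_dir/$script";  # child: run one script
    $pm->finish;                           # child exits here
}
$pm->wait_all_children;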
One of us is confused here. These (100+) perl scripts aren't unit tests right?
If I'm correct keep reading.
Test::* you mentioned aren't really what you're looking for.
Sounds to me like you just need a main.pl, or a .bat, to run each test.pl.
So it seems you're on the right path. If it's possible to have all tests in the same directory, you can do something like this.
my $tests_directory = "/some/path/test_dir";

opendir my $dh, $tests_directory or die "$!";
my @tests = grep { $_ !~ /^\.{1,2}$/ } readdir $dh;

for my $test (@tests) {
    system('perl', $test);
}
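And for the reporting half of the question, a rough sketch that scans the per-script log files afterwards and tallies the Success/Fail lines; the *.log naming scheme is an assumption:

my %report;
for my $log (glob '*.log') {
    open my $fh, '<', $log or die "cannot open $log: $!";
    while (<$fh>) {
        $report{$log}{success}++ if /Success/;
        $report{$log}{fail}++    if /Fail/;
    }
}
printf "%-30s success: %d  fail: %d\n",
    $_, $report{$_}{success} // 0, $report{$_}{fail} // 0
    for sort keys %report;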

How can I compile a Perl script inside a running Perl session?

I have a Perl script that takes user input and creates another script that will be run at a later date. I'm currently going through and writing tests for these scripts and one of the tests that I would like to perform is checking if the generated script compiles successfully (e.g. perl -c <script>.) Is there a way that I can have Perl perform a compile on the generated script without having to spawn another Perl process? I've tried searching for answers, but searches just turn up information about compiling Perl scripts into executable programs.
Compiling a script has a lot of side effects. It results in subs being defined, it results in modules being executed, etc. If you simply want to test whether something compiles, you want a separate interpreter. It's the only way to be sure that testing one script doesn't cause later tests to give false positives or false negatives.
To execute dynamically generated code, use the eval function:

my $script = join '', <main::DATA>;
eval($script);   # prints 3

__DATA__
my $a = 1;
my $b = 2;
print $a+$b, "\n";
However, if you want to just compile or check syntax, you will not be able to do it within the same Perl session.
The syntax_ok function from the Test::Strict library runs a syntax check by running perl -c with an external perl interpreter, so I assume there is no internal way.
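For reference, a minimal syntax_ok usage looks roughly like this (it still spawns an external perl -c under the hood):

use Test::More tests => 1;
use Test::Strict;

syntax_ok('generated_script.pl');   # passes if perl -c reports "syntax OK"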
The only work-around that may work for you would be:
my $script = join '', <main::DATA>;
eval('return;' . $script);
warn $@ if $@;  # syntax error at (eval 1) line 3, near "1
                # my "

__DATA__
my $a = 1
my $b = 2;
print $a+$b, "\n";
In this case, you will be able to check for compilation error(s) using $@; however, because the first line of the code is return;, it will not execute.
Note: Thanks to user mob for a helpful chat and code correction.
Won't something like this work for you?

open(FILE, "perl -c generated_script.pl 2>&1 |");
my @output = <FILE>;
if (join('', @output) =~ /syntax OK/) {
    printf("No Problem\n");
}
close(FILE);
See the Test::Compile module, particularly the pl_file_ok() function.
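A hedged Test::Compile sketch; recent releases favour the object-oriented interface shown here (older ones export a functional pl_file_ok() for a single file), and the directory name is a placeholder for wherever the generated scripts land:

use Test::Compile;

my $test = Test::Compile->new();
$test->all_files_ok('generated');   # compile-checks every .pl/.pm found under generated/
$test->done_testing();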

How do I avoid the need to wait on and close some software driven from Perl?

I have a folder full of script files. When I run them, a program to which they are native is opened, it does some stuff and a CSV file is generated. I wrote some code that I want to run each script file and produce a bunch of CSV files, one for each script.
What happens is the following: when my Perl application is executed, the software is launched and the first of the scripts runs successfully (a CSV file is created). However, at this step the Perl application waits for me to close the software before it continues, and it does this for every script. What can I do to prevent this from happening?
use strict;
use warnings;
use Cwd;

my $dir = cwd();
opendir(DIR, $dir);
my @files = grep(/\.acs$/, readdir(DIR));
$dir =~ s/\//\\/g;
chdir $dir;

foreach (@files)
{
    print "$_\n";
    system("$_");
}
I think you'll want to fork and exec and waitpid - i.e. set up and run your own process and wait for it to finish on your own.
http://larc.ee.nthu.edu.tw/~cthuang/courses/ee2320/12_process.html
This isn't easy, but unfortunately you're doing this on an OS that isn't script-friendly; under Linux or OS X you wouldn't have troubles like this.
You'll need to figure out whether these commands are even available under Windows. You may have to find similar alternatives that are available on Windows if there is no POSIX compatibility layer.
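A rough sketch of that fork/exec/waitpid pattern, reusing the @files list from the question; on Windows this goes through Perl's emulated fork, so the behaviour may differ from a real Unix fork:

foreach my $file (@files) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        exec($file) or die "exec $file failed: $!";   # child: become the script
    }
    waitpid($pid, 0);                                 # parent: wait for this child to finish
    print "$file exited with status ", $? >> 8, "\n";
}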
Your best choice is to ask the application nicely to close itself after it is done.
For example, cmd.exe has the /C parameter, which does exactly that.
Try running the application with /? and see if anything useful comes up.
Failing that, you can use Win32::Process to create the process and then kill it after you are sure it is done. See the documentation for details.
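A hedged Win32::Process sketch; the application path, the command line and the one-minute timeout are placeholders you would adapt:

use Win32;
use Win32::Process;

Win32::Process::Create(
    my $proc,
    'C:\\path\\to\\application.exe',   # full path to the program that runs the .acs scripts
    'application.exe script.acs',      # command line as the program sees it
    0,
    NORMAL_PRIORITY_CLASS,
    '.',
) or die Win32::FormatMessage(Win32::GetLastError());

$proc->Wait(60_000)    # wait up to a minute for it to finish on its own...
    or $proc->Kill(1); # ...and kill it if it is still running after that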

How can I test that a Perl program compiles from my test suite?

I'm building a regression system (not unit testing) for some Perl scripts.
A core component of the system is
`perl script.pl @params 1>stdoutfile 2>stderrfile`;
However, in the course of actually working on the scripts, they sometimes don't compile (shock!), even though perl itself executes correctly. So I don't know how to tell from stderr whether Perl failed to compile the script (and therefore wrote to stderr) or my script barfed on its input (and therefore wrote to stderr).
How do I detect whether a program executed or not, without exhaustively finding Perl error messages and grepping the stderr file?
It might be easiest to do this in two steps:
system("$^X -c script.pl");
if ($? == 0) {
    # it compiled, now let's see if it runs
    system("$^X script.pl @params 1>stdoutfile 2>stderrfile");
    # check $?
}
else {
    warn "script.pl didn't compile";
}
Note the use of $^X instead of perl. This is more flexible and robust. It ensures that you're running from the same installation instead of whatever interpreter shows up first in your path. The system call will inherit your environment (including PERL5LIB), so spawning a different version of perl could result in hard-to-diagnose compatibility errors.
When I want to check that a program compiles, I check that it compiles :)
Here's what I put into t/compile.t to run with the rest of my test suite. It stops all testing with the "bail out" if the script does not compile:
use Test::More tests => 1;

my $file = '...';

print "bail out! Script file is missing!" unless -e $file;

my $output = `$^X -c $file 2>&1`;
print "bail out! Script file does not compile!"
    unless like( $output, qr/syntax OK$/, 'script compiles' );
Scripts are notoriously hard to test. You have to run them and then scrape their output. You can't unit test their guts... or can you?
#!/usr/bin/perl -w

# Only run if we're the file being executed by Perl
main() if $0 eq __FILE__;

sub main {
    ...your code here...
}

1;
Now you can load the script like any other library.
#!/usr/bin/perl -w
use Test::More;
require_ok("./script.pl");
You can even run and test main(). Test::Output is handy for capturing the output. You can say local @ARGV to control arguments, or you can change main() to take @ARGV as an argument (recommended).
Then you can start splitting main() up into smaller routines which you can easily unit test.
Take a look at the $? variable.
From perldoc perlvar:
The status returned by the last pipe close, backtick ("``") command, successful call to wait() or waitpid(), or from the system() operator. This is just the 16-bit status word returned by the traditional Unix wait() system call (or else is made up to look like it). Thus, the exit value of the subprocess is really ("$? >> 8"), and "$? & 127" gives which signal, if any, the process died from, and "$? & 128" reports whether there was a core dump.
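In practical terms, that boils down to a few bit operations on $?:

system("$^X -c script.pl 2>stderrfile");
my $exit   = $? >> 8;    # exit value of the child (0 means perl -c succeeded)
my $signal = $? & 127;   # signal that killed it, if any
my $core   = $? & 128;   # whether it dumped core
die "script.pl did not compile (exit $exit)\n" if $exit != 0;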
It sounds like you need IPC::Open3.
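A hedged IPC::Open3 sketch that keeps the child's stdout and stderr in separate handles, so a compile failure (perl -c exiting non-zero and writing to stderr) can be told apart from the script's own diagnostics:

use IPC::Open3;
use Symbol 'gensym';

my $err = gensym;   # need a distinct handle, otherwise stderr is merged into stdout
my $pid = open3(my $in, my $out, $err, $^X, '-c', 'script.pl');
my @stdout = <$out>;
my @stderr = <$err>;
waitpid($pid, 0);

if ($? >> 8) {
    print "compile failed:\n", @stderr;
}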