How do I avoid the need to wait on and close some software driven from Perl? - perl

I have a folder full of script files. When I run them, the program they are native to opens, does some stuff, and a CSV file is generated. I wrote some code that should run each script file and produce a bunch of CSV files, one for each script.
What happens is the following: when my Perl application is executed, the software is launched and the first of the scripts runs successfully (a CSV file is created). However, at this step the Perl application waits for me to close the software before it continues, and it does this for every script. What can I do to prevent this from happening?
use strict;
use warnings;
use Cwd;
my $dir = cwd();
opendir(DIR, $dir);
my @files = grep(/\.acs$/, readdir(DIR));
$dir=~s/\//\\/g;
chdir $dir;
foreach (@files)
{
print "$_\n";
system("$_");
}

I think you'll want to fork and exec and waitpid - i.e. set up and run your own process and wait for it to finish on your own.
http://larc.ee.nthu.edu.tw/~cthuang/courses/ee2320/12_process.html
This isn't easy, and unfortunately you're doing it on an OS that isn't script-friendly; under Linux or OS X you wouldn't run into this kind of trouble.
You'll need to figure out whether these calls are even available under Windows. If there is no POSIX compatibility layer, you may have to find similar facilities that Windows does provide.
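Roughly, the pattern looks like this (a sketch reusing the @files list from the question; this is POSIX-style code, and on Windows Perl emulates fork with threads, so the behaviour may differ):
foreach my $script (@files) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: replace this process with the program that runs the script.
        exec($script) or die "exec $script failed: $!";
    }
    # Parent: wait for that particular child to finish before moving on.
    waitpid($pid, 0);
    print "$script exited with status ", $? >> 8, "\n";
}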

Your best choice is to ask the application nicely to close itself after it is done.
For example, cmd.exe has a /C parameter that does exactly that.
Try running the application with /?, and see if anything useful comes up.
Failing that, you can use Win32::Process to create the process and then kill it once you are sure it is done; see that module's documentation.
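A rough sketch of that approach might look like this; the application path and script name are placeholders, not details from the question:
use Win32;
use Win32::Process;

my $proc;
Win32::Process::Create(
    $proc,
    'C:\\SomeApp\\SomeApp.exe',        # full path to the scripted application (placeholder)
    'SomeApp.exe script1.acs',         # command line it should run (placeholder)
    0,                                 # do not inherit handles
    NORMAL_PRIORITY_CLASS,
    '.',                               # working directory
) or die Win32::FormatMessage(Win32::GetLastError());

$proc->Wait(60_000);                   # give it up to 60 seconds to finish
$proc->Kill(0);                        # then make sure it is gone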

Related

Persistent effects of modifying the process environment via system

I am making a few calls to the system, mainly cd commands, as certain functions need to be called from certain directories on my system. However, I have noticed that once a call is finished, the effects of that call are lost.
For example, let's say that I start in /home/project and then call:
system("setenv home/project/env/NeededEnvironment");
system("make cfile.o");
The second system call doesn't know about the first call setting the environment needed for the file to compile. I have also tried putting them into one system call separated by ;, but I have the same problem. Is there any way to get the effect of the first call to persist?
That is how system works: it creates a subshell to execute your command, and when the command is complete, the subshell exits leaving your perl process unaffected.
Section 8 of the Perl FAQ also answers this question.
I {changed directory, modified my environment} in a perl script. How come the change disappeared when I exited the script? How do I get my changes to be visible?
Unix
In the strictest sense, it can't be done—the script executes as a different process from the shell it was started from. Changes to a process are not reflected in its parent—only in any children created after the change. There is shell magic that may allow you to fake it by eval()ing the script's output in your shell; check out the comp.unix.questions FAQ for details.
You want code along the lines of
system("cd /home/project/env/NeededEnvironment && make cfile.o") == 0
or warn "$0: make failed";
or use the -C option to make and avoid shell argument parsing as in
system("make", "-C", "/home/project/env/NeededEnvironment", "cfile.o") == 0
or warn "$0: make failed";
If you are writing a Perl script, use Perl itself and shell-out as rarely as possible.
If you need to change your directory:
chdir 'some/other/dir';
If you need to set an environment variable:
$ENV{ SOME_VAR } = 'Some value';
Update
Here are some more Perl built-ins that should be used instead of their shell equivalents:
mkdir
unlink
rmdir
Modules everyone should know about:
File::Copy
File::Path
File::Basename
File::Spec
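For example, a minimal sketch using these modules instead of shelling out (the paths are purely illustrative):
use File::Copy qw(copy);
use File::Path qw(make_path);
use File::Basename qw(basename dirname);
use File::Spec;

my $src  = '/home/project/env/NeededEnvironment/cfile.c';
my $dest = File::Spec->catfile(dirname($src), 'backup', basename($src));

make_path(dirname($dest));                     # instead of shelling out to mkdir -p
copy($src, $dest) or die "copy failed: $!";    # instead of shelling out to cp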

How can I automatically run a large number of Perl scripts?

I need to run over 100 Perl scripts (written by a former employee) on Windows for our system stability testing. Each script has several functions, and each function sends certain Linux commands to our back-end system and gets the results back. The result is written to a log file (currently each script has its own log file). The results are either “Success” or “Fail”.
Running these Perl scripts one by one is killing my time. I am thinking about writing a batch file to automate it, but I still have to parse the result files to generate a test report. I searched online, and it seems several testing frameworks, such as Test::Harness, Test::More, and Test::Most, are good choices. However, based on my understanding, they only take .t files, and our scripts are normal Perl scripts (.pl), not standard Perl test scripts (.t). If I use, say, Test::Harness, should I change all the Perl scripts from .pl to .t and put them under the t folder? How do I call my functions in Test::Harness? Can someone suggest a better way to automate the testing process and generate a test report like Test::Harness does? I think an example would be very helpful.
Test::Harness and friends aren't really an appropriate choice for this task, unless you want to modify all 100 of your scripts to emit TAP data instead of a log file.
Why not just write a Perl script to run all your Perl scripts?
use strict;
use warnings;
my $script_dir = "/path/to/dir/full/of/scripts";
opendir my $dh, $script_dir or die "Can't open dir $script_dir: $!";
my @scripts = grep { /\.pl$/ } readdir $dh;
foreach my $script (@scripts) {
print "Running $script\n";
system 'perl', "$script_dir/$script";
}
You could even parallelize this using fork and exec (or Parallel::ForkManager, even better), assuming that makes sense for your system.
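For instance, a rough sketch with Parallel::ForkManager (assuming the module is installed; $script_dir and @scripts are from the loop above):
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(4);          # run at most four scripts at once
foreach my $script (@scripts) {
    $pm->start and next;                         # parent: move on to the next script
    system 'perl', "$script_dir/$script";        # child: run one script
    $pm->finish;                                 # child exits here
}
$pm->wait_all_children;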
One of us is confused here. These (100+) Perl scripts aren't unit tests, right?
If I'm correct, keep reading.
The Test::* modules you mentioned aren't really what you're looking for.
Sounds to me like you just need a main.pl, or a .bat, to run each test.pl.
So it seems you're on the right path. If it's possible to have all tests in the same directory, you can do something like this.
my $tests_directory = "/some/path/test_dir";
opendir my $dh, $tests_directory or die "$!";
my @tests = grep { $_ !~ /^\.{1,2}$/ } readdir $dh;
for my $test (@tests) {
system('perl', "$tests_directory/$test");
}
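If you also need a combined report, here is a rough sketch that tallies the per-script log files; it assumes each log contains lines with the words “Success” or “Fail” as described in the question, and the *.log naming is an assumption:
my %report;
for my $log (glob "$tests_directory/*.log") {
    open my $fh, '<', $log or die "Can't open $log: $!";
    while (<$fh>) {
        $report{$log}{success}++ if /Success/;
        $report{$log}{fail}++    if /Fail/;
    }
    close $fh;
}
for my $log (sort keys %report) {
    printf "%s: %d succeeded, %d failed\n",
        $log, $report{$log}{success} || 0, $report{$log}{fail} || 0;
}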

Perl: flock() works on Linux, ignores previous lock on AIX

In a nutshell: wrote a Perl script using flock(). On Linux, it behaves as expected. On AIX, flock() always returns 1, even though another instance of the script, using flock(), should be holding an exclusive lock on the lockfile.
We ship a Bash script to restart our program, relying on flock(1) to prevent simultaneous restarts from making multiple processes. Recently we deployed on AIX, where flock(1) doesn't come by default and won't be provided by the admins. Hoping to keep things simple, I wrote a Perl script called flock, like this:
#!/usr/bin/perl
use Fcntl ':flock';
use Getopt::Std 'getopts';
getopts("nu:x:");
%switches = (LOCK_EX => $opt_x, LOCK_UN => $opt_u, LOCK_NB => $opt_n);
my $lockFlags = 0;
foreach $key (keys %switches) {
if($switches{$key}) {$lockFlags |= eval($key)};
}
$fileDesc = $opt_x || $opt_u;
open(my $lockFile, ">&=$fileDesc") || die "Can't open file descriptor: $!";
flock($lockFile, $lockFlags) || die "Can't change lock - $!\n";;
I tested the script by running ( flock -n -x 200; sleep 60 ) 200>lockfile twice, nearly simultaneously, from two terminal tabs.
On Linux, the second run dies with "Resource temporarily unavailable", as expected.
On AIX, the second run acquires the lock, with flock() returning 1, as most definitely not expected.
I understand that flock() is implemented differently on the two systems, the Linux version using flock(2) and the AIX one using, I think, fcntl(2). I don't have enough expertise to understand how this causes my problem, or how to solve it.
Many thanks for any advice.
This doesn't have anything to do with AIX; the open() call in your script is incorrect.
It should be something like:
open (my $lockfile, ">>", $fileDesc) # for LOCK_EX, must be write
You were using the "dup() previously opened file handle" syntax with >&=, but the script had not opened any files to duplicate, nor should it.
My quick tests show the correct behavior (debugging output added).
first window:
$ ./flock.pl -n -x lockfile
opened lockfile
locked
second window:
$./flock.pl -n -x lockfile
opened lockfile
Can't change lock - Resource temporarily unavailable
$
It's not about different commands, I suppose; it's more about global differences between AIX and Linux.
In POSIX systems, file locks are advisory: each program could check the file's state and then reconsider what has to be done with it. No explicit checks = no locking.
On Linux systems, however, one can try to enforce a mandatory lock, although the documentation itself states that it would be unwise to rely on it: the implementation is (and probably always will be) buggy.
Therefore, I suggest implementing such checks of advisory flags within the script itself.
More about it: man 2 fcntl, man 2 flock.

How does Perl interact with the scripts it is running?

I have a Perl script that runs a different utility (called Radmind, for those interested) that has the capability to edit the filesystem. The Perl script monitors output from this process, so it would be running throughout this whole situation.
What would happen if the utility being run by the script tried to edit the script file itself, that is, replace it with a newer version? Does Perl load the script and any linked libraries at the start of its execution and then ignore the script file itself unless told specifically to mess with it? Or perhaps, would all hell break loose, and executions might or might not fail depending on how the new file differed from the one being run?
Or maybe something else entirely? Apologies if this belongs on SuperUser—seems like a gray area to me.
It's not quite as simple as pavel's answer states, because Perl doesn't actually have a clean division of "first you compile the source, then you run the compiled code"[1], but the basic point stands: Each source file is read from disk in its entirety before any code in that file is compiled or executed and any subsequent changes to the source file will have no effect on the running program unless you specifically instruct perl to re-load the file and execute the new version's code[2].
[1] BEGIN blocks will run code during compilation, while commands such as eval and require will compile additional code at run-time
[2] Most likely by using eval or do, since require and use check whether the file has been loaded already and ignore it if it has.
For a fun demonstration, consider
#! /usr/bin/perl
die "$0: where am I?\n" unless -e $0;
unlink $0 or die "$0: unlink $0: $!\n";
print "$0: deleted!\n";
for (1 .. 5) {
sleep 1;
print "$0: still running!\n";
}
Sample run:
$ ./prog.pl
./prog.pl: deleted!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
Your Perl script will be compiled first, then run; so changing your script while it runs won't change the running compiled code.
Consider this example:
#!/usr/bin/perl
use strict;
use warnings;
push @ARGV, $0;
$^I = '';
my $foo = 42;
my $bar = 56;
my %switch = (
foo => 'bar',
bar => 'foo',
);
while (<ARGV>) {
s/my \$(foo|bar)/my \$$switch{$1}/;
print;
}
print "\$foo: $foo, \$bar: $bar\n";
and watch the result when run multiple times.
The script file is read once into memory. You can edit the file from another utility after that -- or from the Perl script itself -- if you wish.
As the others said, the script is read into memory, compiled, and run. GBacon shows that you can delete the file and it will keep running. The code below shows that you can change the file, re-load it with do, and get the new behavior.
use strict;
use warnings;
use English qw<$PROGRAM_NAME>;
open my $ph, '>', $PROGRAM_NAME;
print $ph q[print "!!!!!!\n";];
close $ph;
do $PROGRAM_NAME;
... DON'T DO THIS!!!
Perl scripts are simple text files that are read into memory, compiled in memory, and the text file script is not read again. (Exceptions are modules that come into lexical scope after compilation and do and eval statements in some cases...)
There is a well-known utility that exploits this behavior. Look at the CPAN utility, which is probably in your /usr/bin directory; there is a CPAN version for each version of Perl on your system. CPAN will sense when a new version of CPAN itself is available, ask if you want to install it, and if you say "y" it will download the newer version and respawn itself right where you left off without losing any data.
The logic of this is not hard to follow. Read /usr/bin/CPAN and then follow the individualized versions related to what $Config::Config{version} would generate on your system.
Cheers.

How can I run a shell script from inside a Perl script run by cron?

Is it possible to run a Perl script (vas.pl) that calls shell scripts (date.sh & backlog.sh) from cron, or vice versa?
Thanks.
0 19 * * * /opt/perl/bin/perl /reports/daily/scripts/vas_rpt/vasCIO.pl 2> /reports/daily/scripts/vas_rpt/vasCIO.err
Error encountered:
date.sh: not found
backlog.sh: not found
Perl script:
#!/opt/perl/bin/perl
system("sh date.sh");
open(FH,"/reports/daily/scripts/vas_rpt/date.txt");
@date = <FH>;
close FH;
open(FH,"/reports/daily/scripts/vas_rpt/$cat1.txt");
@array = <FH>;
system("sh backlog.sh $date[0] $array[0]");
close FH;
cron runs your perl script in a different working directory than your current working directory. Use the full path of your script file:
# I'm assuming your shell script reside in the same
# dir as your perl script:
system("sh /reports/daily/scripts/date.sh");
Or, if you're allergic to hardcoding paths like I am, you can use the FindBin module:
use FindBin qw($Bin);
system("sh $Bin/date.sh");
If your shell script also needs to start in the correct directory, then it's probably better to change your working directory first:
use FindBin qw($Bin);
chdir $Bin;
system("sh date.sh");
You can do what you want as long as you are careful.
The first thing to remember with cron jobs is that you get almost no environment set.
The chances are, the current directory is / or perhaps $HOME. And the value of $PATH is minimal - your profile has not been run, for example.
So, your script didn't find 'date.sh' because it wasn't in the correct directory.
To get the data from the shell script into your program, you need to pipe it there - or arrange for the 'date.sh' to dump the data into the file successfully. Of course, Perl has built-in date and time handling, so you don't need to use the shell for it.
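For example, a minimal sketch with the core POSIX module; the exact format is a guess at what date.sh produces:
use POSIX qw(strftime);
my $date = strftime('%Y%m%d', localtime);   # instead of shelling out to date.sh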
You also did not run with use warnings; or use strict; which would also help you. For example, $cat1 is not a defined variable.
Personally, I run a simple shell script from cron and let it deal with all the complexities; I don't use I/O redirection in the crontab file. That's partly a legacy of working on ancient systems - but it also leads to portable and reliable running of cron jobs.
It's possible. Just keep in mind that your working directory when running under cron may not be what you think it is - it's the value in your HOME environment variable, or whatever is specified in the /etc/passwd file. Consider fully qualifying the paths to your .sh scripts.
There are a lot of things that need care in your script, and I talk about most of them in the "Secure Programming Techniques" chapter of Mastering Perl. You can also find some of it in perlsec.
Since you are taking external data and passing them to other external programs, you should use taint checking to ensure that the data are what you expect. What if someone were able to sneak something extra into those files?
When you want to pass data to external programs, use system in the list form so the shell doesn't get a chance to interpret possible meta-characters.
Instead of relying on the PATH to find the programs that you expect to run, specify their full paths explicitly to ensure you are at least running the file you think you are (and not something someone snuck into a directory that is earlier in PATH). If you were really paranoid (like taint checking is), you might also check that those files and directories had suitable permissions (e.g., not world-writeable).
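Here is a rough sketch that pulls those suggestions together; the date.txt location comes from the question, but the backlog.sh path and the expected date format are assumptions, not tested code:
#!/opt/perl/bin/perl -T
use strict;
use warnings;

# Taint mode (-T) insists on a trustworthy PATH before running external commands.
$ENV{PATH} = '/usr/bin:/bin';

open my $fh, '<', '/reports/daily/scripts/vas_rpt/date.txt'
    or die "Can't open date.txt: $!";
my $raw = <$fh>;
close $fh;
chomp $raw;

# Untaint the value by matching it against the pattern we expect
# (a hypothetical YYYYMMDD date).
my ($date) = $raw =~ /\A(\d{8})\z/
    or die "Unexpected date value: $raw";

# List form of system plus full paths: the shell never parses the arguments.
system('/bin/sh', '/reports/daily/scripts/vas_rpt/backlog.sh', $date) == 0
    or die "backlog.sh failed: $?";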
Just as a bonus note, if you only want one line from a filehandle, you can use the line-input operator in scalar context:
my $date = <$fh>;
You probably want to chomp the data too to get rid of possible ending newlines. Even if you don't think a terminating newline should be there because another program created the file, someone looking at the file with a text editor might add it.
Good luck, :)