How does Perl interact with the scripts it is running?

I have a Perl script that runs a different utility (called Radmind, for those interested) that has the capability to edit the filesystem. The Perl script monitors output from this process, so it would be running throughout this whole situation.
What would happen if the utility being run by the script tried to edit the script file itself, that is, replace it with a newer version? Does Perl load the script and any linked libraries at the start of its execution and then ignore the script file itself unless told specifically to mess with it? Or perhaps, would all hell break loose, and executions might or might not fail depending on how the new file differed from the one being run?
Or maybe something else entirely? Apologies if this belongs on SuperUser—seems like a gray area to me.

It's not quite as simple as pavel's answer states, because Perl doesn't actually have a clean division of "first you compile the source, then you run the compiled code"[1], but the basic point stands: each source file is read from disk in its entirety before any code in that file is compiled or executed, and any subsequent changes to the source file will have no effect on the running program unless you specifically instruct perl to re-load the file and execute the new version's code[2].
[1] BEGIN blocks will run code during compilation, while commands such as eval and require will compile additional code at run-time
[2] Most likely by using eval or do, since require and use check whether the file has been loaded already and ignore it if it has.

For a fun demonstration, consider
#! /usr/bin/perl
die "$0: where am I?\n" unless -e $0;
unlink $0 or die "$0: unlink $0: $!\n";
print "$0: deleted!\n";
for (1 .. 5) {
    sleep 1;
    print "$0: still running!\n";
}
Sample run:
$ ./prog.pl
./prog.pl: deleted!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!

Your Perl script will be compiled first, then run; so changing your script while it runs won't change the running compiled code.
Consider this example:
#!/usr/bin/perl
use strict;
use warnings;
push @ARGV, $0;
$^I = '';
my $foo = 42;
my $bar = 56;
my %switch = (
    foo => 'bar',
    bar => 'foo',
);
while (<ARGV>) {
    s/my \$(foo|bar)/my \$$switch{$1}/;
    print;
}
print "\$foo: $foo, \$bar: $bar\n";
and watch the result when you run it multiple times: each run edits the script's own source in place (via $^I), swapping the my $foo and my $bar declarations, so the values change from run to run.

The script file is read once into memory. You can edit the file from another utility after that -- or from the Perl script itself -- if you wish.

As the others said, the script is read into memory, compiled, and run. GBacon's example shows that you can delete the file and it will keep running. The code below shows that you can change the file and then do it to pick up the new behavior.
use strict;
use warnings;
use English qw<$PROGRAM_NAME>;
open my $ph, '>', $PROGRAM_NAME;
print $ph q[print "!!!!!!\n";];
close $ph;
do $PROGRAM_NAME;
... DON'T DO THIS!!!

Perl scripts are plain text files that are read into memory and compiled there; the script file itself is not read again. (Exceptions are modules that come into scope after compilation, and do and eval statements in some cases...)
There is a well-known utility that exploits this behavior. Look at CPAN, which is probably in your /usr/bin directory in several versions: there is a CPAN version for each version of Perl on your system. CPAN will sense when a new version of CPAN itself is available, ask if you want to install it, and if you say "y" it will download the newer version and respawn itself right where you left off without losing any data.
The logic of this is not hard to follow. Read /usr/bin/CPAN and then follow the version-specific copies named after what $Config::Config{version} would report on your system.
Cheers.

Related

How to run this simple Perl CGI script on Mac from terminal?

This simple .pl script is supposed to grab all of the images in a directory and output an HTML page that, when opened in a browser, displays all of the images in that directory at their natural dimensions.
From the Mac command line, I want to just say perl myscript.pl and have it run.
… It used to run under Apache in /cgi-bin.
#!/usr/bin/perl -wT
# myscript.pl
use strict;
use CGI;
use Image::Size;
my $q = new CGI;
my $imageDir = "./";
my @images;
opendir DIR, "$imageDir" or die "Can't open $imageDir $!";
@images = grep { /\.(?:png|gif|jpg)$/i } readdir DIR;
closedir DIR;
print $q->header("text/html"),
      $q->start_html("Images in $imageDir"),
      $q->p("Here are all the images in $imageDir");
foreach my $image (@images) {
    my ($width, $height) = imgsize("$image");
    print $q->p(
        $q->a({-href=>$image},
            $q->img({-src=>$image,
                     -width=>$width,
                     -height=>$height})
        )
    );
}
print $q->end_html;
Perl used to include the CGI module in the Standard Library, but it was removed in v5.22 (see The Long Death of CGI.pm). Lots of older code assumed that it would always be there, but now you have to install it yourself:
$ cpan CGI
The corelist program that comes with Perl is handy for checking these things:
$ corelist CGI
Data for 2020-03-07
CGI was first released with perl 5.004, deprecated (will be CPAN-only) in v5.19.7 and removed from v5.21.0
I handle this sort of thing by using the extract_modules program from my Module::Extract::Use module. Otherwise, I end up installing one module, then run again and discover another one to install, and so on:
$ extract_modules some_script.pl | xargs cpan
There's another interesting point for module writers. For a long time, we'd only list the external prerequisites in Makefile.PL. You should list even the internal ones now that Perl has a precedent for kicking modules out of the Standard Library. Along with that, specify a dependency for any module you actually use rather than relying on it being in a particular distribution.
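As a rough illustration of that advice, a Makefile.PL for the image-listing script above might declare its dependencies explicitly (the name, version, and file names here are only placeholders):
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME      => 'myscript',
    VERSION   => '0.01',
    EXE_FILES => [ 'myscript.pl' ],
    PREREQ_PM => {
        'CGI'         => '0',   # no longer in core as of v5.22
        'Image::Size' => '0',
    },
);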
And, I was moving legacy programs around so much that I wrote a small tool, scriptdist to wrap the module infrastructure around single-file programs so I could install them as modules. The big win there is that cpan and similar tools install the prereqs for you. I haven't used it in a long time since I now just start programs as regular Perl distributions.

Shell Programming inside Perl

I am writing a code in perl with embedded shell script in it:
#!/usr/bin/perl
use strict;
use warnings;
our sub main {
    my $n;
    my $n2 = 0;
    $n = chdir("/home/directory/");
    if ($n) {
        print "change directory successful $n \n";
        $n2 = system("cd", "section");
        system("ls");
        print "$n2 \n";
    }
    else {
        print "no success $n \n";
    }
    print "$n\n";
}
main();
But it doesn't work: when I do the ls, it doesn't show the new files. Does anyone know another way of doing this? I know I can use chdir(), but that is not the only problem, as I have other commands which I have created that are simply shell commands strung together. So does anyone know how exactly to drive the shell from Perl, so that the shell commands stay attached to the same process rather than a new process being created for each call to system? I really don't know what to do.
The edits have been used to improve the question. Please don't mind the edits if the question is clear.
Edit: good point made by mob that each system call is its own process, so the directory change dies every time. But what I am trying to do is create a Perl script which follows an algorithm that decides the flow of control of the shell script. So how do I make all these shell commands run in the same process?
system spawns a new process, and any changes made to the environment of the new process are lost when the process exits. So calling system("cd foo") will change the directory to foo inside of a very short-lived process, but won't have any effect on the current process or any future subprocesses.
To do what you want to do (*), combine your commands into a single system call.
$n2 = system("cd section; ls");
You can use Perl's rich quoting features to pass longer sequences of commands, if necessary.
$n2 = system q{cd section
if ls foo ; then
echo we found foo in section
./process foo
else
echo we did not find foo\!
./create_foo > foo
fi};
$n2 = system << "EOSH";
cd section
./process bar > /tmp/log
cd ../sekshun
./process foo >> /tmp/log
if grep -q Error /tmp/log ; then
echo there were errors ...
fi
EOSH
(*) of course there are easy ways to do this entirely in Perl, but let's assume that the OP eventually will need some function only available in an external program
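For completeness, a minimal sketch of the pure-Perl route hinted at in that footnote: change directory and list files from within the same Perl process, so no shell state needs to survive between commands (the directory name is just an example).
use strict;
use warnings;

# chdir affects the current (Perl) process, so everything after it
# sees the new directory, including any child processes we spawn.
chdir 'section' or die "Can't chdir to section: $!";

opendir my $dh, '.' or die "Can't open directory: $!";
my @files = grep { $_ ne '.' && $_ ne '..' } readdir $dh;
closedir $dh;

print "$_\n" for sort @files;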
system("cd", "section"); attempts to execute the program cd, but there is no such program on your machine.
There is no such program because each process has its own current work directory, and one process cannot change another process's current work directory. (Programs would malfunction left and right if it was possible.)
It looks like you are attempting to have a Perl program execute a shell script. That requires recreating the shell in Perl. (More specific goals might have simpler solutions.)
What I am trying to do is create a perl script which follows an algorithm which decides the flow of control of the shell script.
Minimal change:
Create a shell script that prompts for instructions/commands. Have your Perl script launch the shell script using Expect and feed it answers/commands.
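A minimal sketch of that Expect approach, assuming a hypothetical menu.sh that prints an "Enter command:" prompt (the script name, prompt text, and command sent are all placeholders):
use strict;
use warnings;
use Expect;

# Spawn the interactive shell script; it stays attached to one process.
my $exp = Expect->spawn('./menu.sh')
    or die "Cannot spawn ./menu.sh: $!";

# Each time the script prompts, decide what to send from Perl-side logic.
$exp->expect(10,
    [ qr/Enter command:/ => sub {
          my $self = shift;
          $self->send("ls section\n");
          exp_continue;
      } ],
    [ 'eof' => sub { } ],
);

$exp->soft_close();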

How can I automatically run a large number of Perl scripts?

I need to run over 100 Perl scripts (written by the former employee) on Windows for our system stability testing. Each script has several functions, and each function sends certain Linux commands to our back-end system and gets results back. The result is written into a log file (currently each script has one log file). The results are “Success” or “Fail”.
Running these Perl scripts one by one is killing my time. I am thinking about writing a batch file to automate it, but I have to parse the result files to generate a test report. I searched online, and it seems several testing frameworks, such as Test::Harness, Test::More, and Test::Most, are good choices. But based on my understanding, they only take .t files, and our scripts are normal Perl scripts (.pl), not standard Perl test scripts (.t). If using, say, Test::Harness, should I change all the Perl scripts from .pl to .t and put them under a t folder? How would I call my functions from Test::Harness? Can someone suggest a better way to automate the testing process and generate a test report like Test::Harness does? I think an example would be very helpful.
Test::Harness and friends aren't really an appropriate choice for this task, unless you want to modify all 100 of your scripts to emit TAP data instead of a log file.
Why not just write a Perl script to run all your Perl scripts?
use strict;
use warnings;
my $script_dir = "/path/to/dir/full/of/scripts";
opendir my $dh, $script_dir or die "Can't open dir $script_dir: $!";
my @scripts = grep { /\.pl$/ } readdir $dh;
foreach my $script (@scripts) {
    print "Running $script\n";
    system 'perl', $script;
}
You could even parallelize this using fork and exec (or Parallel::ForkManager, even better), assuming that makes sense for your system.
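A minimal sketch of the Parallel::ForkManager variant, reusing the @scripts list built above (the worker count of 4 is arbitrary):
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(4);   # run up to 4 scripts at once

foreach my $script (@scripts) {
    $pm->start and next;          # parent: fork a worker, move on to the next script
    system 'perl', $script;       # child: run one script
    $pm->finish;                  # child: exit
}

$pm->wait_all_children;           # parent: wait for every worker to complete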
One of us is confused here. These (100+) Perl scripts aren't unit tests, right?
If I'm correct, keep reading.
The Test::* modules you mentioned aren't really what you're looking for.
Sounds to me like you just need a main.pl, or a .bat, to run each test.pl.
So it seems you're on the right path. If it's possible to have all tests in the same directory, you can do something like this.
my $tests_directory = "/some/path/test_dir";
opendir my $dh, $tests_directory or die "$!";
my @tests = grep { $_ !~ /^\.{1,2}$/ } readdir $dh;
for my $test (@tests) {
    system('perl', $test);
}
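To also get the test report the question asks for, here is a minimal sketch that records each script's exit status; it assumes the scripts exit non-zero on failure (if they always exit 0 and only write “Fail” into their log files, parse the logs instead), and the report file name is arbitrary:
my %result;
for my $test (@tests) {
    system('perl', $test);
    $result{$test} = ($? == 0) ? 'Success' : 'Fail';
}

open my $report, '>', 'report.txt' or die "Can't write report.txt: $!";
printf {$report} "%-40s %s\n", $_, $result{$_} for sort keys %result;
close $report;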

How do I avoid the need to wait on and close some software driven from Perl?

I have a folder full of script files. When I run them, a program to which they are native is opened, it does some stuff and a CSV file is generated. I wrote some code that I want to run each script file and produce a bunch of CSV files, one for each script.
What happens is the following: when my Perl application is executed, the software is launched and the first of the scripts runs successfully (a CSV file is created). However, at this point the Perl application waits for me to close the software before it continues, and it does this for every script. What can I do to prevent this from happening?
use strict;
use warnings;
use Cwd;
my $dir = cwd();
opendir(DIR, $dir);
my @files = grep(/\.acs$/, readdir(DIR));
$dir =~ s/\//\\/g;
chdir $dir;
foreach (@files)
{
    print "$_\n";
    system("$_");
}
I think you'll want to fork and exec and waitpid - i.e. set up and run your own process and wait for it to finish on your own.
http://larc.ee.nthu.edu.tw/~cthuang/courses/ee2320/12_process.html
This isn't easy because, unfortunately, you're doing this on an OS that isn't script-friendly; under Linux or OS X you wouldn't have any trouble like this.
You'll need to figure out whether these calls are even available under Windows. You may have to find similar facilities that are available on Windows if there is no POSIX compatibility layer.
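As a rough sketch of that fork/exec/waitpid pattern (on Windows, Perl emulates fork with threads, so behavior can differ; the program name and arguments below are placeholders):
use strict;
use warnings;

my @command = ('some_app.exe', 'script1.acs');   # placeholder command

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: replace this process with the external program
    exec @command or die "exec failed: $!";
}

# parent: wait for the child to finish, then inspect its exit status
waitpid($pid, 0);
my $status = $? >> 8;
print "exited with status $status\n";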
Your best choice is to ask the application nicely to close itself after it is done.
For example, cmd.exe has a /C parameter that does exactly that.
Try running the application with /? and see if anything useful comes up.
Failing that, you can use Win32::Process to create the process and then kill it after you are sure it is done; see that module's documentation.
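A minimal sketch of the Win32::Process route, assuming a hypothetical application path and script file name (adjust both, and the timeout, to your situation):
use strict;
use warnings;
use Win32;
use Win32::Process;

my $exe = 'C:\\Program Files\\SomeApp\\someapp.exe';   # placeholder path

my $proc;
Win32::Process::Create($proc, $exe, 'someapp.exe script1.acs', 0,
                       NORMAL_PRIORITY_CLASS, '.')
    or die Win32::FormatMessage(Win32::GetLastError());

$proc->Wait(60_000);   # wait up to 60 seconds for the CSV to be written
$proc->Kill(0);        # then make sure the application is closed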

How can I test that a Perl program compiles from my test suite?

I'm building a regression system (not unit testing) for some Perl scripts.
A core component of the system is
`perl script.pl @params 1>stdoutfile 2>stderrfile`;
However, in the course of actually working on the scripts, they sometimes don't compile (shock!), although perl itself still executes correctly. I don't know how to tell from stderr whether Perl failed to compile the script (and therefore wrote to stderr) or my script barfed on its input (and therefore wrote to stderr).
How do I detect whether a program executed or not, without exhaustively finding Perl error messages and grepping the stderr file?
It might be easiest to do this in two steps:
system("$^X -c script.pl");
if ($? == 0) {
    # it compiled, now let's see if it runs
    system("$^X script.pl @params 1>stdoutfile 2>stderrfile");
    # check $?
}
else {
    warn "script.pl didn't compile";
}
Note the use of $^X instead of perl. This is more flexible and robust. It ensures that you're running from the same installation instead of whatever interpreter shows up first in your path. The system call will inherit your environment (including PERL5LIB), so spawning a different version of perl could result in hard-to-diagnose compatibility errors.
When I want to check that a program compiles, I check that it compiles :)
Here's what I put into t/compile.t to run with the rest of my test suite. It stops all testing with the "bail out" if the script does not compile:
use Test::More tests => 1;

my $file = '...';

print "Bail out! Script file is missing!\n" unless -e $file;
my $output = `$^X -c $file 2>&1`;
print "Bail out! Script file does not compile!\n"
    unless like( $output, qr/syntax OK$/, 'script compiles' );
Scripts are notoriously hard to test. You have to run them and then scrape their output. You can't unit test their guts... or can you?
#!/usr/bin/perl -w

# Only run if we're the file being executed by Perl
main() if $0 eq __FILE__;

sub main {
    ...your code here...
}

1;
Now you can load the script like any other library.
#!/usr/bin/perl -w
use Test::More;
require_ok("./script.pl");
You can even run and test main(). Test::Output is handy for capturing the output. You can say local @ARGV to control arguments or you can change main() to take @ARGV as an argument (recommended).
Then you can start splitting main() up into smaller routines which you can easily unit test.
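A minimal sketch of such a test, assuming script.pl has been given a main() as above and prints something to STDOUT (the pattern checked here is just a placeholder):
#!/usr/bin/perl
use strict;
use warnings;

use Test::More;
use Test::Output;

require_ok('./script.pl');

# control the arguments main() will see
local @ARGV = ();

# capture STDOUT from main() and check it against a (placeholder) pattern
stdout_like { main() } qr/\S/, 'main() prints something';

done_testing();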
Take a look at the $? variable.
From perldoc perlvar:
The status returned by the last pipe close, backtick ("``") command, successful call to wait() or waitpid(), or from the system() operator. This is just the 16-bit status word returned by the traditional Unix wait() system call (or else is made up to look like it). Thus, the exit value of the subprocess is really ("$? >> 8"), and "$? & 127" gives which signal, if any, the process died from, and "$? & 128" reports whether there was a core dump.
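Putting that to use, a small sketch that runs the script and decodes $? (adapted from the example in perldoc -f system):
system($^X, 'script.pl');

if ($? == -1) {
    print "failed to execute: $!\n";
}
elsif ($? & 127) {
    printf "child died with signal %d, %s coredump\n",
        ($? & 127), ($? & 128) ? 'with' : 'without';
}
else {
    printf "child exited with value %d\n", $? >> 8;
}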
It sounds like you need IPC::Open3.
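A minimal sketch of that idea: run the compile check through IPC::Open3 so the child's STDERR is captured on its own handle, and use the exit status to decide whether it compiled (the script name is a placeholder; for large output you would want to drain the handles with select or IPC::Run to avoid blocking):
use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';

my $err = gensym;   # a separate handle just for the child's STDERR

my $pid = open3(my $in, my $out, $err, $^X, '-c', 'script.pl');
close $in;

my @errors = <$err>;   # compile diagnostics (or "syntax OK") land here
waitpid($pid, 0);

if ($? >> 8) {
    print "script.pl did not compile:\n", @errors;
}
else {
    print "script.pl compiled cleanly\n";
}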