Calling one Perl program from another - perl

I have two Perl files, and I want to call one from the other with arguments.
First file, a.pl:
$OUTFILE = "C://programs/perls/$ARGV[0]";
# this should be some out file created inside work like C://programs/perls/abc.log
Second file, abc.pl:
require "a.pl" "abc.log";
# $OUTFILE is a variable inside a.pl; I want to append the current file's name as the log name.
I want it to create an output file whose log name matches the current file's name (so abc.pl produces abc.log).
One more constraint: I need to use $OUTFILE in both a.pl and abc.pl.
If there is any better approach please suggest.

The require keyword only takes one argument. That's either a file name or a package name. Your line
require "a.pl" "abc.log";
is wrong. It gives a syntax error along the lines of "String found where operator expected".
You can require one .pl file from another .pl file, but that is very old-fashioned, badly written Perl code.
If neither file defines a package then the code is implicitly placed in the main package. You can declare a package variable in the outside file and use it in the one that is required.
In abc.pl:
use strict;
use warnings;
# declare a package variable
our $OUTFILE = "C://programs/perls/filename";
# load and execute the other program
require 'a.pl';
And in a.pl:
use strict;
use warnings;
# do something with $OUTFILE, like use it to open a file handle
print $OUTFILE;
If you run this, it will print
C://programs/perls/filename
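If the goal is to name the log after the calling script, as in the question, one way is to build $OUTFILE from the caller's own file name before the require. This is just a sketch under the question's assumed paths; basename (from the core File::Basename module) and $0 are the only additions:
use strict;
use warnings;
use File::Basename qw(basename);
# e.g. when this script is abc.pl, $OUTFILE becomes C:/programs/perls/abc.log
our $OUTFILE = "C:/programs/perls/" . basename($0, '.pl') . ".log";
# a.pl can now open or append to $OUTFILE
require 'a.pl';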

You should convert the Perl file you want to call into a Perl module:
Hello.pm
#!/usr/bin/perl
package Hello;
use strict;
use warnings;
sub printHello {
print "Hello $_[0]\n"
}
1;
Then you can call it:
test.pl
#!/usr/bin/perl
use strict;
use warnings;
# you have to put the current directory to the module search path
use lib (".");
use Hello;
Hello::printHello("a");
I tested it in Git Bash on Windows; you may need to make some modifications for your environment.
This way you can pass as many arguments as you like, and you don't have to hunt through the file you are calling for variables that may or may not be initialized (that approach is less safe, I think; e.g. you might end up deleting something you didn't really want to). The disadvantage is that you need to learn a bit about Perl modules, but I think it is definitely worth it.
A second approach is to use an exec/system call (you can pass arguments this way too, if forking a child process is acceptable), but that is another story.
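For what it's worth, a minimal sketch of that system-call route (the log name here is just an example value; a.pl would pick it up from $ARGV[0] as in the question):
# run a.pl as a separate process, passing the log name as its first argument
my $log_name = 'abc.log';    # hypothetical value; choose it however you like
system($^X, 'a.pl', $log_name) == 0
    or die "a.pl failed with status $?";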

I would do this another way. Have the program take the name of the log file as a command-line parameter:
% perl a.pl name-of-log-file
Inside a.pl, open that file to append to it then output whatever you like. Now you can run it from many other sorts of places besides another Perl program.
# a.pl
my $log_file = $ARGV[0] // 'default_log_name';
open my $fh, '>>:utf8', $log_file or die ...;
print { $fh } $stuff_to_output;
But you could also call it from another Perl program. The $^X is the path to the currently running perl, and this uses system in the slightly safer list form:
system $^X, 'a.pl', $name_of_log_file;
How you get something into $name_of_log_file is up to you. In your example you already knew the value in your first program.

Related

In Perl, why do I get "undefined subroutine" in a Perl module but not in main?

I'm getting an "undefined subroutine" for sub2 in the code below but not for sub1.
This is the perl script (try.pl)...
#!/usr/bin/env perl
use strict;
use IO::CaptureOutput qw(capture_exec_combined);
use FindBin qw($Bin);
use lib "$Bin";
use try_common;
print "Running try.pl\n";
sub1("echo \"in sub1\"");
sub2("echo \"in sub2\"");
exit;
sub sub1 {
(my $cmd) = @_;
print "Executing... \"${cmd}\"\n";
my ($stdouterr, $success, $exit_code) = capture_exec_combined($cmd);
print "${stdouterr}\n";
return;
}
This is try_common.pm...
#! /usr/bin/env perl
use strict;
use IO::CaptureOutput qw(capture_exec_combined);
package try_common;
use Exporter;
our @ISA = qw(Exporter);
our @EXPORT = qw(
sub2
);
sub sub2 {
(my $cmd) = @_;
print "Executing... \"${cmd}\"\n";
my ($stdouterr, $success, $exit_code) = capture_exec_combined($cmd);
print "${stdouterr}\n";
return;
}
1;
When I run try.pl I get...
% ./try.pl
Running try.pl
Executing... "echo "in sub1""
in sub1
Executing... "echo "in sub2""
Undefined subroutine &try_common::capture_exec_combined called at
/home/me/PERL/try_common.pm line 20.
This looks like some kind of scoping issue, because if I cut and paste the "use IO::CaptureOutput qw(capture_exec_combined);" line as the first line of sub2, it works. This is not necessary in try.pl (it runs sub1 OK), but it is a problem in the Perl module. Hmmmm....
Thanks in Advance for any help!
You imported capture_exec_combined with the use clause before declaring the package, so it was imported into the main package, not into try_common. Move the package declaration further up.
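In other words, a sketch of how the top of try_common.pm could look once the package statement comes first (so the import lands in try_common rather than main):
package try_common;
use strict;
use IO::CaptureOutput qw(capture_exec_combined);  # now imported into try_common
use Exporter;
our @ISA    = qw(Exporter);
our @EXPORT = qw(sub2);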
You should take a look at the perlmod document to understand how modules work. In short:
When you use package A (in Perl 5), you change the namespace of the following code to A, and all global symbol (e.g. subroutine) definitions after that point go into that package. Subroutines inside a package need not be exported and may be called with their package-qualified name: A::function. This you seem to have found already.
Perl uses package as a way to create modules and split code in different files, but also as the basis for its object orientation features.
Most of the time, modules are handled by a special core module called Exporter. See Exporter. This module uses some variables to know what to do, like @EXPORT, @EXPORT_OK or @ISA. The first defines the names that are exported by default when you include the module with use Module. The second defines the names that may be exported (but they need to be mentioned explicitly with use Module qw(name1 name2)). The last tells, in an object-oriented fashion, what your module is. If you don't care about object orientation, your module typically "is a" Exporter.
Also, as stated in another answer, when you define a module, the package declaration should be the first thing in the file, so that everything after it falls under that namespace.
I hate when I make this mistake although I don't make it much anymore. There are two habits you can develop:
Most of the time, make the entire file the package: the first line is the package statement, and no other package statements show up in the file.
Or, use the new PACKAGE BLOCK syntax and put everything for that package inside the block. I do this for small classes that I might need only locally:
package Foo {
# everything including use statements go in this block
}
I think I figured it out. If, in the Perl module, I prefix capture_exec_combined with "::", it works.
Still, why isn't this needed in the main script, try.pl?

Finding the standard out for a perl program

I'm redirecting standard out for a perl program. Example:
perl run_program.pl > /log/run_program.log
Is there a way to know what standard out is pointing to? In this case I'm looking to get the value '/log/run_program.log'.
If it's not possible, is there another or better way to get the same result?
Thanks in advance!
EDIT: The reason I'm not setting STDOUT in the program is because I'm calling a bunch of .pm files that have print lines that I want to go to STDOUT without having to pass the file to them.
On my system, you can use
readlink("/proc/$$/fd/1")
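For example, a small Linux-specific sketch (it assumes a /proc filesystem, so it won't work everywhere):
# ask the kernel where file descriptor 1 (STDOUT) of this process points
my $stdout_target = readlink("/proc/$$/fd/1");
print STDERR "STDOUT is going to: $stdout_target\n";   # report on STDERR, not into the log itself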
EDIT: The reason I'm not setting STDOUT in the program is because I'm calling a bunch of .pm files that have print lines that I want to go to STDOUT without having to pass the file to them.
Just to let you know, you might be able to use the select function to change the default output file handle:
use strict;
use warnings;
use autodie;
open my $output_fd, ">", "/log/run_program.log";
my $old_default_fd = select( $output_fd );
print "I'm now going into /log/run_program.log\n";
select($old_default_fd); # Restore the default when you no longer need it
This may work with most of your Perl modules. Just hope that they're not doing something stupid like:
print STDOUT "Ha, ha. I'm still going to STDOUT.\n";
I hate it when Perl modules print stuff.
<soapbox>
To you Perl Module writers:
Perl modules should not be printing (unless that's their main purpose). You should instead return what you want to print and let the caller decide what to do with the output.
</soapbox>
For the first part of your question, no. There's no way for the perl program to know where STDOUT is directed to.
The redirection happens external to the program, and is "wired up" before the perl process even starts. STDOUT could be pointed to a device, a file, or another process (a pipe).
The whole purpose of redirecting stdout to a file is to take a program that normally writes to stdout and send its output to a file instead. The OS doesn't give you the name of the file, because it figures your program is too stupid to know what to do with a file name.
So your best bet is to get it as my $file_name = shift; and open it yourself. (A shift at the top level of a script pulls from @ARGV.)
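A sketch of that approach, with the log path passed as the first argument instead of via shell redirection:
#!/usr/bin/perl
use strict;
use warnings;
my $file_name = shift or die "Usage: $0 logfile\n";   # shift pulls from @ARGV here
open my $fh, '>>', $file_name or die "Can't open $file_name: $!";
print {$fh} "now the program knows where its output is going\n";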
Give these ideas a chance:
...
my $log_path = "/log/run_program.log"; # or using $0 in some manner
open my $log_handler, ">", $log_path or die "Can't open $log_path: $!";
...
Now you could code a myprint subroutine that calls print $log_handler and use it throughout the whole program, or better, having a look at OVERRIDING CORE FUNCTIONS, you could redefine print yourself like this:
...
use subs 'print';
sub print { #redefine here }
...
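Going back to the simpler myprint idea, here is a minimal sketch (it assumes the $log_handler opened in the snippet above):
# every call to myprint goes to the log handle instead of STDOUT
sub myprint {
    print {$log_handler} @_;
}
myprint("this line ends up in /log/run_program.log\n");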

get caller module code in perl

How can I get the actual code of the calling module in Perl?
some_module.pm
package some_module;
use my_module;
CODE
CODE
my_module.pm
package my_module;
sub import {
my $package = caller;
my $code = actual perl code of some_module.pm;
}
Is this possible, or would I have to use an open function? I would think source filters do something similar.
There's a CPAN module called B::Hooks::Parser that allows you to not just see, but alter the line where you were called. (That is, not alter it on disk, but alter what Perl sees while it's parsing the line.) Though you cannot see or alter the part of the line which has already been compiled. This only works at compile time of course, and because the Perl parser reads and tokenizes one line at a time, it is limited to looking at a single line.
If you need to see the entire file that called you, you can use:
open my $caller_fh, '<', (caller)[1]
or die("Cannot open caller: $!");
However, (caller)[1] might not always return a real file name - for example, if you are called from a one-liner it will be "-e", and from a string eval it will be something like "(eval 23)".

How to pass whole file to perl script?

I want to have a Perl script that receives a file and does some computation based on it.
Here is my try:
Perl.pl
#!/usr/bin/perl
use strict;
use warnings;
my $book = <STDIN>;
print $book;
Here is my execution of the script:
./Perl.pl < textFile
My script only prints the first line of textFile. How can I load all of textFile into my variable $book?
I want the file to be passed in that way; I do not want to use Perl's open(...).
Reading from a file handle into a scalar pulls in one line at a time.
You can either:
use a while loop to append the lines one by one until there are none left, or
set $/ to undef to change your script's idea of what constitutes a line. There is an example of the latter in perldoc perlvar (read it, as it explains best practices for changing it); a minimal sketch also follows below.
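Here is a minimal sketch of the $/ approach:
my $book = do {
    local $/;      # undef the input record separator => read everything at once
    <STDIN>;
};
print $book;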
You can also use Path::Class to make this easy. It is a wrapper around many file manipulation modules.
For your purpose:
#! /usr/bin/perl
use Path::Class qw/file/;
my $file = file(shift @ARGV);
print $file->slurp;
You can run it by:
./slurp.pl textFile
The answer you're looking for is in the Perl FAQ.
How can I read in an entire file all at once?

Why am I unable to load a Perl library when using the `do` function?

I'm new to Perl, and I'm updating an old Perl website. Every .pl file seems to have this line at the top:
do "func.inc";
So I figured I could use this file to tag on a subroutine for global use.
func.inc
#!/usr/bin/perl
sub foobar
{
return "Hello world";
}
index.pl
#!/usr/bin/perl
do "func.inc";
print "Content-type: text/html\n\n";
print foobar();
However, I get this error:
Undefined subroutine &main::foobar called at /path/to/index.pl line 4.
Both files are in the same directory, and there are tons of subs in func.inc already which are used throughout the website. However, the script works in the Linux production environment but does not work in my Windows 7 dev environment (I'm using ActivePerl).
Update:
It looks like the file is not being included; the sub works if the file is included using an absolute path...
do "C:/path/to/func.inc";
... so it looks like relative paths don't work in my local dev environment, though they work in the production environment. But this is no good for me, because the absolute path on my dev machine will not work on the live server.
How do I get do to work using a relative path on my Windows 7 dev machine?
Update 2:
I was using the Perl -T switch. Unfortunately this removes "." from @INC, and so stops us from using relative paths for do. I removed this switch and the old code is working now. I'm aware that this is not good practice, but unfortunately I'm working with old code, so it seems that I have no choice.
The perlfunc documentation for do reads
do EXPR
Uses the value of EXPR as a filename and executes the contents of the file as a Perl script.
do 'stat.pl';
is just like
eval `cat stat.pl`;
except that it's more efficient and concise, keeps track of the current filename for error messages, searches the @INC directories, and updates %INC if the file is found.
So to see all this in action, say C:\Cygwin\tmp\mylib\func.inc looks like
sub hello {
print "Hello, world!\n";
}
1;
and we make use of it in the following program:
#!/usr/bin/perl
use warnings;
use strict;
# your code may have unshift @INC, ...
use lib "C:/Cygwin/tmp/mylib";
my $func = "func.inc";
do $func;
# Now we can just call it. Note that with strict subs enabled,
# we have to use parentheses. We could also predeclare with
# use subs qw/ hello /;
hello();
# do places func.inc's location in %INC
if ($INC{$func}) {
print "$0: $func found at $INC{$func}\n";
}
else {
die "$0: $func missing from %INC!";
}
Its output is
Hello, world!
./prog: func.inc found at C:/Cygwin/tmp/mylib/func.inc
As you've observed, do ain't always no crystal stair, which the do documentation explains:
If do cannot read the file, it returns undef and sets $! to the error. If do can read the file but cannot compile it, it returns undef and sets an error message in $@. If the file is successfully compiled, do returns the value of the last expression evaluated.
To check all these cases, we can no longer use simply do "func.inc" but
unless (defined do $func) {
my $error = $! || $@;
die "$0: do $func: $error";
}
Explanations for each case are below.
do cannot read the file
If we rename func.inc to nope.inc and rerun the program, we get
./prog: do func.inc: No such file or directory at ./prog line 12.
do can read the file but cannot compile it
Rename nope.inc back to func.inc and delete the closing curly brace in hello to make it look like
sub hello {
print "Hello, world!\n";
1;
Running the program now, we get
./prog: do func.inc: Missing right curly or square bracket at C:/Cygwin/tmp/mylib/func.inc line 4, at end of line
syntax error at C:/Cygwin/tmp/mylib/func.inc line 4, at EOF
do can read the file and compile it, but it does not return a true value.
Delete the 1; at the end of func.inc to make it
sub hello {
print "Hello, world!\n";
}
Now the output is
./prog: do func.inc: at ./prog line 13.
So without a return value, success resembles failure. We could complicate the code that checks the result of do, but the better choice is to always return a true value at the end of Perl libraries and modules.
Note that the program runs correctly even with taint checking (-T) enabled. Try it and see! Be sure to read Taint mode and @INC in perlsec.
You use the subroutine the same way that you'd use any other subroutine. It doesn't matter that you loaded it with do. However, you shouldn't use do for that. Check out the "Packages" chapter in Intermediate Perl for a detailed explanation of loading subroutines from other files. In short, use require instead.
See the documentation for do. You need to have func.inc (which you could also call func.pl, since pl is "perl library") in one of the directories where Perl looks for libraries. That might be different from the directory that has index.pl. Put func.inc in one of the directories in @INC, or add its directory to @INC. do also doesn't die if it can't load the file, so it doesn't tell you that it failed. That's why you shouldn't use do to load libraries. :)
Making sure the path is correct, use:
#!/usr/bin/perl
require("func.inc");
print "Content-type: text/html\n\n";
print foobar();
I would first check whether the file was actually loaded; the documentation for do mentions that it updates %INC if the file was found. There is also more information in that documentation.
Make sure you have func.inc in the correct path.
do "func.inc"
means you are saying that func.inc is in the same directory as your Perl script. Check the correct path and then do this:
do "/path/func.inc"