I have a Perl script which takes two arguments as follows and calls the appropriate function depending on the arguments. I currently call this script from bash, but I want to call it from Perl. Is that possible?
/opt/sbin/script.pl --group="value1" --rule="value2";
The script also exits with a return value that I would like to read.
The Perl equivalent of the sh command
/opt/sbin/script.pl --group="value1" --rule="value2"
is
system('/opt/sbin/script.pl', '--group=value1', '--rule=value2');
You could also launch the command in a shell by using the following, though I'd avoid doing so:
system(q{/opt/sbin/script.pl --group="value1" --rule="value2"});
Just like you'd have to do in sh, you'll have to follow up with some error checking (whichever approach you took). You can do so by using use autodie qw( system );. Check the docs for how to do it "manually" if you want more flexibility.
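For instance, a minimal sketch of the autodie route (same script path and made-up argument values as above); with autodie in effect, system throws an exception instead of quietly returning a failure status:
use autodie qw( system );
# dies with a descriptive message if the command can't be started or exits non-zero
system('/opt/sbin/script.pl', '--group=value1', '--rule=value2');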
If you want to capture the output:
$foo = `/opt/sbin/script.pl --group="value1" --rule="value2"`;
If you want to capture the exit status, but send script.pl's output to stdout:
$status = system "/opt/sbin/script.pl --group=value1 --rule=value2";
If you want to read its output as if from a file:
open SCRIPT, "/opt/sbin/script.pl --group=value1 --rule=value2 |" or die $!;
while (<SCRIPT>) { print "got: $_" }
close SCRIPT;
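The same idea with a lexical filehandle and the list form of open, which skips the shell entirely, might look like this (a sketch, reusing the made-up argument values):
open my $script_fh, '-|', '/opt/sbin/script.pl', '--group=value1', '--rule=value2'
    or die "can't run script.pl: $!";
while (my $line = <$script_fh>) {
    print "script.pl said: $line";
}
close $script_fh;
my $exit_status = $? >> 8;   # the script's exit value is in $? after close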
Yes. You can use system, exec, or backticks.
The main difference between system and exec is that exec "executes a command and never returns".
Example of system:
system("perl", "foo.pl", "arg");
Related
I have a Perl CGI program. myperlcgi.pl
Within this program I have the following:
my $retval = system('extprogram');
extprogram has a print statement within.
The output from extprogram is being included within myperlcgi.pl
I tried adding
> output.txt 2>&1
to the system call, but it did not change anything.
How do I prevent the output from extprogram being included in myperlcgi.pl's output?
I'm surprised that stdout from the system call shows up in myperlcgi.pl's output.
The system command just doesn’t give you complete control over capturing STDOUT and STDERR of the executed command.
Use backticks or open to execute the command instead. That will capture the STDOUT of the command’s execution. If the command also outputs to STDERR, then you can append 2>&1 to redirect STDERR to STDOUT for capture in backticks, like so:
my $output = `$command 2>&1`;
If you really need the native return status of the executed command, you can get that information using $? or ${^CHILD_ERROR_NATIVE}. See perldoc perlvar for details.
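For example, immediately after the backticks call above, $? can be unpacked with the usual idiom from perldoc -f system:
if ($? == -1) {
    warn "failed to execute: $!";
}
elsif ($? & 127) {
    warn "child died with signal ", ($? & 127), "\n";
}
else {
    my $exit_status = $? >> 8;   # the command's real exit code
}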
Another option is to use the IPC::Open3 Perl library, but I find that method to be overkill for most situations.
What would be an example of how I can call a shell command, say 'ls -a', in a Perl script, and how can I retrieve the command's output as well?
How to run a shell script from a Perl program
1. Using system
system($command, @arguments);
For example:
system("sh", "script.sh", "--help");
system("sh script.sh --help");
system will execute the $command with @arguments and return to your script when finished. You may check $! for certain errors passed to the OS by the external application. Read the documentation for system for the nuances of how the various invocations are slightly different.
2. Using exec
This is very similar to the use of system, but it replaces your script with the command and never returns. Again, read the documentation for exec for more.
3. Using backticks or qx//
my $output = `script.sh --option`;
my $output = qx/script.sh --option/;
The backtick operator and its equivalent, qx//, execute the command and options inside the operator and return that command's STDOUT output when it finishes.
There are also ways to run external applications through creative use of open, but this is advanced use; read the documentation for more.
From Perl HowTo, the most common ways to execute external commands from Perl are:
my $files = `ls -la` — captures the output of the command in $files
system "touch ~/foo" — if you don't want to capture the command's output
exec "vim ~/foo" — if you don't want to return to the script after executing the command
open(my $file, '|-', "grep foo"); print $file "foo\nbar" — if you want to pipe input into the command
Examples
`ls -l`;
system("ls -l");
exec("ls -l");
Look at the open function in Perl - especially the variants using a '|' (pipe) in the arguments. Done correctly, you'll get a file handle that you can use to read the output of the command. The backtick operators also do this.
You might also want to review whether Perl has access to the C functions that the command itself uses. For example, for ls -a, you could use the opendir function, and then read the file names with the readdir function, and finally close the directory with (surprise) the closedir function. This has a number of benefits - precision probably being more important than speed. Using these functions, you can get the correct data even if the file names contain odd characters like newline.
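A minimal sketch of that approach (the directory path is just an example):
# roughly what `ls -a` shows for a single directory
opendir(my $dh, '/home/grant') or die "opendir: $!";
my @names = readdir($dh);    # readdir also returns '.' and '..', just like ls -a
closedir($dh);
print "$_\n" for sort @names;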
As you become more experienced with using Perl, you'll find that there are fewer and fewer occasions when you need to run shell commands. For example, one way to get a list of files is to use Perl's built-in glob function. If you want the list in sorted order you could combine it with the built-in sort function. If you want details about each file, you can use the stat function. Here's an example:
#!/usr/bin/perl
use strict;
use warnings;
foreach my $file ( sort glob('/home/grant/*') ) {
    my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size,
        $atime, $mtime, $ctime, $blksize, $blocks) = stat($file);
    printf("%-40s %8u bytes\n", $file, $size);
}
There are a lot of ways you can call a shell command from a Perl script, such as:
backticks
`ls`, which captures the output and gives it back to you.
system
system('ls');
open
Refer #17 here: Perl programming tips
You might want to look into open2 and open3 in case you need bidirectional communication.
I have been using system and qq() to run Linux programs inside Perl, and it has worked well.
#!/usr/bin/perl   # A shebang line in Perl
use strict;        # It can save you a lot of time and headache
use warnings;      # It helps you find typing mistakes
# The my keyword in Perl declares the listed variable
my $adduser = '/usr/sbin/adduser';
my $edquota = '/usr/sbin/edquota';
my $chage   = '/usr/bin/chage';
my $quota   = '/usr/bin/quota';
my $fullname;   # these three would be filled in (e.g. from user input) before use
my $username;
my $home;
# The system() function executes a system shell command
# qq() can be used in place of double quotes
system qq($adduser --home $home --gecos "$fullname" $username);
system qq($edquota -p john $username);
system qq($chage -E \$(date -d +180days +%Y-%m-%d) $username);
system qq($chage -l $username);
system qq($quota -s $username);
I'm writing a Perl script that mimics gcc. My script needs to process some stdout from gcc. The processing part is done, but I can't get the simple part working: how can I forward all the command line parameters as-is to the next process (gcc in my case)? Command lines sent to gcc tend to be very long and can potentially contain lots of escape sequences; I don't want to play that escaping game, and I know it's tricky to get right on Windows in complicated cases.
Basically,
gcc.pl some crazies\ t\\ "command line\""
and gcc.pl has to forward that same command line to the real gcc.exe (I use Windows).
I do it like this: open("gcc.exe $cmdline 2>&1 |") so that stderr from gcc is fed to stdout and my Perl script processes that stdout. The problem is that I can't find anywhere how to construct that $cmdline.
I would use AnyEvent::Subprocess:
use AnyEvent::Subprocess;
use feature 'say';

my $process_line = sub { say "got line: $_[0]" };
my $gcc = AnyEvent::Subprocess->new(
    code      => [ 'gcc.exe', @ARGV ],
    delegates => [
        'CompletionCondvar',
        'StandardHandles',
        { MonitorHandle => { handle => 'stdout', callback => $process_line } },
        { MonitorHandle => { handle => 'stderr', callback => $process_line } },
    ],
);
my $running = $gcc->run;
my $done    = $running->recv;
$done->is_success or die "OH NOES";
say "it worked";
The MonitorHandle delegate works like redirection, except you have the option of using a separate filter for each of stdout and stderr. The "code" arg is an arrayref representing a command to run.
"Safe Pipe Opens" in the perlipc documentation describes how to get another command's output without having to worry about how the shell will parse it. The technique is typically used for securely handling untrusted inputs, but it also spares you the error-prone task of correctly escaping all the arguments.
Because it sidesteps the shell, you'll need to create the effect of 2>&1 yourself, but as you'll see below, it's straightforward to do.
#! /usr/bin/perl
use warnings;
use strict;

my $pid = open my $fromgcc, "-|";
die "$0: fork: $!" unless defined $pid;

if ($pid) {
    while (<$fromgcc>) {
        print "got: $_";
    }
}
else {
    # 2>&1
    open STDERR, ">&STDOUT" or warn "$0: dup STDERR: $!";
    no warnings "exec";    # so we can write our own message
    exec "gcc", @ARGV or die "$0: exec: $!";
}
Windows proper does not support open FH, "-|", but Cygwin does so happily:
$ ./gcc.pl foo.c
got: gcc: foo.c: No such file or directory
got: gcc: no input files
Read up on the exec function and the system function in Perl.
If you provide either of these with an array of arguments (rather than a single string), it invokes the Unix execve() function or a close relative directly, without letting the shell interpret anything, exactly as you need it to do.
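A sketch of that pattern for the gcc.pl wrapper in the question, assuming the wrapper only needs to pass its own arguments straight through:
# list form of system: no shell involved, so each element of @ARGV
# reaches gcc as exactly one argument, escapes and all
my $status = system('gcc', @ARGV);
die "gcc failed with exit code ", $? >> 8, "\n" if $status != 0;
# exec 'gcc', @ARGV; would do the same but never return to this script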
Thanks for the answers. I came to the conclusion that I made a big mistake touching Perl again: hours of time wasted to find out that it can't be done properly.
Perl uses a different way to split command line parameters than all other apps that use the MS stdlib (which is standard on Win32).
Because of that, some command line parameters that were meant to be interpreted as a single command line argument can be interpreted by Perl as more than one argument. That means that all I'm trying to do is a waste of time because of that buggy behavior in Perl. It's impossible to get this task done correctly if 1) I can't access the original command line as-is and 2) Perl doesn't split command line arguments correctly.
As a simple test:
script.pl """test |test"
on win32 will incorrectly interpret command line as:
ARGV=['"test', '|test']
Whereas, the correct "answer" on windows has to be
ARGV=['"test |test']
I used ActiveState Perl, and I also tried the latest version of Strawberry Perl: both suck. It appears that the Perl that comes with MSYS works properly, most likely because it was built against MinGW instead of the Cygwin runtime.
The problem with Perl is that it has a buggy command line parser, and it won't work on Windows NO MATTER WHAT Cygwin supports or not.
I have a simple case where an environment variable (which I cannot control) expands to
perl gcc.pl -c "-IC:\ffmpeg\lib_avutil\" rest of args
Perl sees that I have two args only: -c and '-IC:\ffmpeg\lib_avutil" rest of args'
whereas any conforming Windows implementation receives the second command line arg as '-IC:\ffmpeg\lib_avutil\'. That means that Perl is a huge pile of junk for my simple case, because it doesn't provide adequate means to access command line arguments. I'm better off using boost::regex and doing all my parsing in C++ directly; at least I won't ever make dumb mistakes like ne vs. != for comparing strings, etc. Windows's escaping rules for command line arguments are quite strange, but they are standard on Windows, and Perl for some strange reason doesn't want to follow the OS's rules.
Can someone please help me? In Perl, what is the difference between:
exec "command";
and
system("command");
and
print `command`;
Are there other ways to run shell commands too?
exec
executes a command and never returns.
It's like a return statement in a function.
If the command is not found exec returns false.
It never returns true, because if the command is found it never returns at all.
There is also no point in returning STDOUT, STDERR or exit status of the command.
You can find documentation about it in perlfunc,
because it is a function.
system
executes a command and your Perl script is continued after the command has finished.
The return value is the exit status of the command.
You can find documentation about it in perlfunc.
backticks
like system, executes a command and your Perl script is continued after the command has finished.
In contrast to system, the return value is the STDOUT of the command.
qx// is equivalent to backticks.
You can find documentation about it in perlop, because unlike system and exec, it is an operator.
Other ways
What is missing from the above is a way to execute a command asynchronously.
That means your Perl script and your command run simultaneously.
This can be accomplished with open.
It allows you to read STDOUT/STDERR and write to STDIN of your command.
It is platform dependent though.
There are also several modules which can ease this task.
There is IPC::Open2 and IPC::Open3 and IPC::Run, as well as
Win32::Process::Create if you are on Windows.
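A minimal sketch of the asynchronous idea using open (the command name is a placeholder):
# start the child; it runs concurrently while we do other work
open my $child, '-|', 'some_long_running_command'
    or die "can't start command: $!";
# ... do other work here; the child keeps running ...
while (my $line = <$child>) {    # read its output whenever we're ready
    print "child said: $line";
}
close $child;                    # waits for the child and sets $?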
In general I use system, open, IPC::Open2, or IPC::Open3 depending on what I want to do. The qx// operator, while simple, is too constraining in its functionality to be very useful outside of quick hacks. I find open much handier.
system: run a command and wait for it to return
Use system when you want to run a command, don't care about its output, and don't want the Perl script to do anything until the command finishes.
#doesn't spawn a shell, arguments are passed as they are
system("command", "arg1", "arg2", "arg3");
or
#spawns a shell, arguments are interpreted by the shell, use only if you
#want the shell to do globbing (e.g. *.txt) for you or you want to redirect
#output
system("command arg1 arg2 arg3");
qx// or ``: run a command and capture its STDOUT
Use qx// when you want to run a command, capture what it writes to STDOUT, and don't want the Perl script to do anything until the command finishes.
#arguments are always processed by the shell
#in list context it returns the output as a list of lines
my @lines = qx/command arg1 arg2 arg3/;
#in scalar context it returns the output as one string
my $output = qx/command arg1 arg2 arg3/;
exec: replace the current process with another process.
Use exec along with fork when you want to run a command, don't care about its output, and don't want to wait for it to return. system is really just
sub my_system {
    die "could not fork\n" unless defined(my $pid = fork);
    return waitpid $pid, 0 if $pid;   # parent waits for child
    exec @_;                          # replace child with new process
}
You may also want to read the waitpid and perlipc manuals.
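A hedged sketch of the non-waiting variant the paragraph above alludes to, i.e. fork plus exec without waitpid (the command name is a placeholder):
defined(my $pid = fork) or die "could not fork: $!";
unless ($pid) {
    # child: replace ourselves with the command and never return
    exec 'some_command', 'arg1' or die "exec failed: $!";
}
# parent: continues immediately; reap the child later with waitpid or ignore SIGCHLD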
open: run a process and create a pipe to its STDIN or STDERR
Use open when you want to write data to a process's STDIN or read data from a process's STDOUT (but not both at the same time).
# read from a gzip compressed file as if it were a normal file
# (-c makes gzip write the decompressed data to stdout)
open my $read_fh, "-|", "gzip", "-d", "-c", $filename
    or die "could not open $filename: $!";
# write to a gzip compressed file as if it were a normal file
# (shell form here, so the shell handles the > redirection)
open my $write_fh, "|-", "gzip -c > $filename"
    or die "could not open $filename: $!";
IPC::Open2: run a process and create a pipe to both STDIN and STDOUT
Use IPC::Open2 when you need to read from and write to a process's STDIN and STDOUT.
use IPC::Open2;
open2 my $out, my $in, "/usr/bin/bc"
or die "could not run bc";
print $in "5+6\n";
my $answer = <$out>;
IPC::Open3: run a process and create a pipe to STDIN, STDOUT, and STDERR
use IPC::Open3 when you need to capture all three standard file handles of the process. I would write an example, but it works mostly the same way IPC::Open2 does, but with a slightly different order to the arguments and a third file handle.
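Still, a hedged sketch of what a typical IPC::Open3 call looks like (the command is a placeholder; note that the STDERR handle must be created up front, e.g. with Symbol::gensym, or STDERR gets merged into STDOUT):
use IPC::Open3;
use Symbol 'gensym';

my $err = gensym;    # pre-made glob so STDERR is captured separately
my $pid = open3(my $in, my $out, $err, 'some_command', 'arg1');
print $in "input for the command\n";
close $in;
my @stdout_lines = <$out>;   # for large outputs you'd interleave reads (e.g. with select)
my @stderr_lines = <$err>;
waitpid($pid, 0);            # reap the child and set $?
my $exit_status = $? >> 8;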
Let me quote the manuals first:
perldoc exec():
The exec function executes a system command and never returns -- use system instead of exec if you want it to return.
perldoc system():
Does exactly the same thing as exec LIST, except that a fork is done first, and the parent process waits for the child process to complete.
In contrast to exec and system, backticks don't give you the return value but the collected STDOUT.
perldoc `String`:
A string which is (possibly) interpolated and then executed as a system command with /bin/sh or its equivalent. Shell wildcards, pipes, and redirections will be honored. The collected standard output of the command is returned; standard error is unaffected.
Alternatives:
In more complex scenarios, where you want to fetch STDOUT, STDERR or the return code, you can use well known standard modules like IPC::Open2 and IPC::Open3.
Example:
use IPC::Open2;
my $pid = open2(\*CHLD_OUT, \*CHLD_IN, 'some', 'cmd', 'and', 'args');
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;
Finally, IPC::Run from the CPAN is also worth looking at…
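If IPC::Run is installed, a basic call looks roughly like this (a sketch with placeholder command and arguments):
use IPC::Run qw(run);

my ($in, $out, $err) = ('', '', '');
# run returns true when the command exits with status 0
run [ 'some_command', 'arg1', 'arg2' ], \$in, \$out, \$err
    or die "some_command failed: $?";
print "stdout was: $out";
print "stderr was: $err";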
What's the difference between Perl's backticks (`), system, and exec?
exec -> exec "command";
system -> system("command");
backticks -> print `command`;
exec
exec executes a command and never resumes the Perl script. It's to a script like a return statement is to a function.
If the command is not found, exec returns false. It never returns true, because if the command is found, it never returns at all. There is also no point in returning STDOUT, STDERR or exit status of the command. You can find documentation about it in perlfunc, because it is a function.
E.g.:
#!/usr/bin/perl
print "Need to start exec command";
my $data2 = exec('ls');
print "Now END exec command";
print "Hello $data2\n\n";
In the above code there are three print statements, but because exec leaves the script, only the first print statement is executed. Also, the exec output is not assigned to any variable.
Here, you only get the output of the first print statement and of executing the ls command on standard out.
system
system executes a command and your Perl script is resumed after the command has finished. The return value is the exit status of the command. You can find documentation about it in perlfunc.
E.g.:
#!/usr/bin/perl
print "Need to start system command";
my $data2 = system('ls');
print "Now END system command";
print "Hello $data2\n\n";
In the above code there are three print statements. As the script is resumed after the system command, all three print statements are executed.
Also, the result of running system is assigned to $data2, but the assigned value is 0 (the exit status of ls).
Here, you're getting the output of the first print statement, then that of the ls command, followed by the outputs of the final two print statements on standard out.
backticks (`)
Like system, enclosing a command in backticks executes that command and your Perl script is resumed after the command has finished. In contrast to system, the return value is STDOUT of the command. qx// is equivalent to backticks. You can find documentation about it in perlop, because unlike system and exec, it is an operator.
E.g.:
#!/usr/bin/perl
print "Need to start backticks command";
my $data2 = `ls`;
print "Now END system command";
print "Hello $data2\n\n";
In the above code there are three print statements, and all three are executed. The output of ls does not go to standard out directly; it is assigned to the variable $data2 and then printed by the final print statement.
The difference between 'exec' and 'system' is that exec replaces your current program with 'command' and NEVER returns to your program. system, on the other hand, forks and runs 'command' and returns you the exit status of 'command' when it is done running. The backtick operator runs 'command' and then returns a string representing its standard out (whatever it would have printed to the screen).
You can also use popen to run shell commands, and I think there is a Shell module ('use Shell') that gives you transparent access to typical shell commands.
Hope that clarifies it for you.
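If the Shell module is available (it lives on CPAN; older Perls bundled it), usage looks roughly like the sketch below; treat the interface as something to verify rather than a guarantee:
use Shell qw(ls);            # imports ls() as a Perl function that shells out
my $listing = ls('-la');     # captures the command's output, much like backticks
print $listing;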