How to run an executable file using Perl?
For instance, I want to run a plain notepad.exe. How can I achieve this?
This is what I've got:
my @args = system("notepad.exe");
system(@args) == 0 or die "system @args failed: $?";
But it returns:
Can't spawn "cmd.exe": No such file or directory blah blah blah.
What am I missing?
Your code seems a bit confused. What you probably want is something like
my $cmd  = "notepad.exe";
my @args = ($cmd, "readme.txt");
system(@args);
if ($? == -1) {
    die "system @args failed: $!";
}
system returns a single value, not an array. See perldoc -f system for a detailed description.
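For instance, a minimal sketch of checking that single return value (readme.txt is just the example file from the code above):
# system() returns the wait status (also available in $?); -1 means the command could not be started.
my $status = system("notepad.exe", "readme.txt");
if ($status == -1) {
    die "failed to launch notepad.exe: $!";
}
elsif ($status != 0) {
    warn "notepad.exe exited with status ", $status >> 8, "\n";
}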
This thread on perlmonks discusses the error you're getting with a few different solutions being presented.
This answer is an extension of my original comment. Sorry if it's superfluous.
Try this.
my $prog = "C:\\strawberry\\perltest\\Extractor.bat";
if (-f $prog) {    # does it exist?
    print "Will run $prog";
    system($prog);
}
else {
    print "$prog doesn't exist.";
}
This is a Perl internal error probably caused by a broken environment. Perl can't find the Windows shell cmd.exe that is used under the hood to run the program passed to system.
Use a utility such as Process Monitor to see what's going on at the OS level.
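If you'd rather poke at the environment from Perl itself, a rough sketch along these lines shows whether cmd.exe is even reachable via PATH (this assumes a Windows machine and only illustrates the idea):
use strict;
use warnings;
use File::Spec;

# COMSPEC normally points at cmd.exe on Windows.
print "COMSPEC = ", (defined $ENV{COMSPEC} ? $ENV{COMSPEC} : "not set"), "\n";

# Walk the PATH directories and report where cmd.exe is found, if anywhere.
for my $dir (File::Spec->path()) {
    my $candidate = File::Spec->catfile($dir, 'cmd.exe');
    print "found: $candidate\n" if -e $candidate;
}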
Related
I'm trying to capture the output of the command executed as a different user using:
my $command = qq(sudo su - <username> -c '/usr/bin/whatever');
my $pid = open $cmdOutput, "-|", $command;
How can I capture the STDERR of /usr/bin/whatever?
I tried
$pid = open $cmdOutput, "-|", $command || die " something went wrong: $!";
but it looks like this is capturing the possible errors of "open" itself.
I also tried
my $command = qq(sudo su - <username> -c '/usr/bin/whatever' 2>/tmp/error.message);
which will redirect the STDERR to the file, which I can parse later, but I wanted some more straightforward solution.
Also, I only want to use core modules.
This is covered thoroughly in perlfaq8. Since you are using a piped open, the relevant examples are those that go by open3 from the core IPC::Open3 module.
Another option is to use IPC::Run for managing your processes, and the pump function will do what you need. The IPC::Open3 documentation says of IPC::Run:
This is a CPAN module that has better error handling and more facilities than Open3.
With either of these you can manipulate STDOUT and STDERR separately or together, as needed. For convenient and complete output capture also see Capture::Tiny.
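For example, a minimal sketch with Capture::Tiny (the command name below is only a placeholder):
use strict;
use warnings;
use Capture::Tiny qw(capture);

# Run the command and collect both streams plus the exit status.
my ($stdout, $stderr, $exit) = capture {
    system('some_command', 'arg1');    # placeholder command
};
print "STDERR was: $stderr\n";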
Other than 2> output redirection, there are no more elementary methods for the piped open.
If you don't mind mixing the streams or losing STDOUT altogether, another option is
my $command = 'cmd 2>&1 1>/dev/null';  # Remove 1>/dev/null to have both
my $pid = open my $cmdOutput, "-|", $command;
while (<$cmdOutput>) { print }         # STDERR only
The first redirection merges the STDERR stream into STDOUT, so you get them both, mixed (with STDOUT subject to buffering, so lines may well come out of order). The second redirection sends STDOUT away, so with it in place you read only the command's STDERR from the handle.
The question is about running an external command using open, but I'd like to mention that the canonical and simple qx (backticks) can be used in the same way. It returns STDOUT, so redirection just like above is needed to get STDERR. For completeness:
my $cmd = 'cmd_to_execute';
my $allout = qx($cmd 2>&1);             # Both STDOUT and STDERR in $allout, or
my $stderr = qx($cmd 2>&1 1>/dev/null); # Only STDERR
my $exit_status = $?;
The qx puts the child process exit code (status) in $?. This can then be inspected for failure modes; see a summary in the qx page or a very thorough discussion in I/O operators in perlop.
Note that the STDERR returned this way is from the command, if it ran. If the command itself couldn't be run (for a typo in command name, or fork failed for some reason) then $? will be -1 and the error will be in $!.
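Put together, a small sketch of telling those failure modes apart after qx might look like this (the command name is made up):
my $out = qx(some_command 2>&1);    # placeholder command
if ($? == -1) {
    warn "could not run the command: $!\n";
}
elsif ($? != 0) {
    warn "command failed with exit code ", $? >> 8, "\n";
}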
As suggested by zdim, I used the IPC::Open3 module and ended up with something like this doing the job for me:
use IPC::Open3;
use IO::Select;
use Symbol qw(gensym);

my $instanceCommand = qq(sudo su - <username> -c '<command>');
my ($infh, $outfh, $errfh, $pid);
$errfh = gensym();
$pid = open3($infh, $outfh, $errfh, $instanceCommand);

my $sel = IO::Select->new;
$sel->add($outfh, $errfh);

while (my @ready = $sel->can_read) {
    foreach my $fh (@ready) {
        my $line = <$fh>;
        if (not defined $line) {
            $sel->remove($fh);
            next;
        }
        if ($fh == $outfh) {
            chomp $line;
            #<----- command output processing ----->
        }
        elsif ($fh == $errfh) {
            chomp $line;
            #<----- command error processing ----->
        }
        else {
            die "Reading from something else\n";
        }
    }
}
waitpid($pid, 0);
Maybe not completely bulletproof, but it's working fine for me, even while executing a fairly convoluted cascaded script as <command>.
The desired destination, opened for writing, could be dup()'ed to FD #2
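As a rough sketch of that dup idea (the log path is the one from the question), you can reopen STDERR onto an already opened handle:
# Open the destination for writing, then duplicate it onto STDERR (file descriptor 2).
open my $errlog, '>', '/tmp/error.message' or die "open: $!";
open STDERR, '>&', $errlog or die "cannot dup onto STDERR: $!";
warn "this now ends up in /tmp/error.message\n";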
I want to capture the standard error displayed on the host machine after $ssh->capture into a variable.
For example, when I try:
use Net::OpenSSH;
my $ssh = Net::OpenSSH->new($host);
my $out=$ssh->capture("cd /home/geek");
$ssh->error and
die "remote cd command failed: " . $ssh->error;
The output is:
child exited with code 1 at ./change_dir.pl line 32
I am not able to see what the error is. I get "no such file or directory" on the terminal. I want to capture that same "no such file or directory" in $out.
Example 2:
my ($stdout, $stderr) = $ssh->capture("cd /home/geek");
if ($stderr) {
    print "Error = $stderr";
}
else {
    print "$stdout";
}
I see "Error =" printed, but I do not see $stderr on the screen.
On success $stdout is printed, but $stderr never gets printed; only "Error =" does.
When an error occurs it is most likely not going to be in STDOUT, and if it is in STDERR you are not catching that. You need to get to the application's exit code, in the following way. (Given the update to the question which I only see now: See the end for how to get STDERR.)
After the capture method you want to examine $? for errors (see Net::OpenSSH). Unpack that to get to the exit code returned by what was actually run by $ssh, and then look in that application's docs to see what that code means.
$exit_code = $?;
if ($exit_code) {
    $app_exit = $exit_code >> 8;
    warn "Error, bit-shift \$? --> $app_exit";
}
The code to investigate is $app_exit.
An example. I use zip in a project and occasionally catch the error of 3072 (that is the $?). When that's unpacked as above I get 12, which is zip's actual exit. I look up its docs and it nicely lists its exit codes and 12 means Nothing to update. That's the design decision for zip, to exit with 12 if it had no files to update in the archive. Then that exit gets packaged into a two-byte number (in the upper byte), and that is returned and so it is what I get in $?.
Failure modes in general, from system in Perl docs
if ($? == -1) { warn "Failed to execute -- " }
elsif ($? & 127) {
$msg = sprintf("\tChild died with signal %d, %s coredump -- ",
($? & 127), ($? & 128) ? 'with' : 'without');
warn $msg;
} else {
$msg = sprintf("\tChild exited with value %d -- ", $? >> 8);
warn $msg;
}
The actual exit code $? >> 8 is supplied by whatever ran and so its interpretation is up to that application. You need to look through its docs and hopefully its exit codes are documented.
Note that $ssh->error seems designed for this task. From the module's docs
my $output = $ssh->capture({ timeout => 10 }, "echo hello; sleep 20; echo bye");
$ssh->error and warn "operation didn't complete successfully: ". $ssh->error;
The printed error needs further investigation. Docs don't say what it is, but I'd expect the unpacked code discussed above (the question update indicates this). Here $ssh only runs a command and it doesn't know what went wrong. It merely gets back the command's exit code, to be looked at.
Or, you can modify the command to get the STDERR on the STDOUT, see below.
The capture method is an equivalent of Perl's backticks (qx). There is a lot on SO on how to get STDERR from backticks, and Perl's very own FAQ has that nicely written up in perlfaq8. A complication here is that this isn't qx but a module's method and, more importantly, it runs on another machine. However, the "output redirection" method should still work without modifications. The command (run by $ssh) can be written so that its STDERR is redirected to its STDOUT.
$cmd_all_output = 'your_whole_command 2>&1';
$ssh->capture($cmd_all_output);
Now you will get the error that you see at the terminal ("no such file or directory") printed on STDOUT, and so it will wind up in your $stdout. Note that one must use sh shell syntax, as above. There is a bit more to it, so please look it up (but this should work as it stands). Most of the time it is the same message as in the exit code description.
The check that you have in your code is good, the first line of defense: One should always check $? when running external commands, and for this the command to run need not be touched.
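A hedged sketch that combines the redirection with the existing error check (using the cd command from the question) might look like this:
# Redirect the remote command's STDERR into its STDOUT so capture() picks it up.
my $out = $ssh->capture('cd /home/geek 2>&1');
if ($ssh->error) {
    warn "remote command failed: ", $ssh->error, "\n";
    warn "remote said: $out";    # the redirected STDERR text ends up here
}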
I have Perl version 5.8.3 installed on my Windows machine.
A Perl script containing the code below fails while running.
if (-e $file1)
I know that this checks whether $file1 is present or not.
The only error shown is "perl command failed". Nothing else.
Could you please help me with this?
You're using a version of Perl from 2004. You should seriously consider upgrading.
The file test operators like -e have been part of Perl for a very long time. They are certainly supported by Perl 5.8.3.
You say that your error is "perl command failed". That is not an error that is generated by Perl, so I suspect there is something else going on here that you're not telling us about (presumably because you think it isn't important).
If I had to guess why your -e test is failing, I'd say that it's because $file1 doesn't contain any information about the directory where the file can be found, and therefore Perl is looking in the wrong place. Perhaps you can get more information with code like this:
use Cwd;

if (-e $file1) {
    ...
} else {
    die "Can't find file: " . cwd() . '/' . $file1;
}
This will show you where Perl is looking for the file.
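As a rough illustration of that point, building an absolute path before the test makes a failure easier to diagnose (the directory and file name below are hypothetical):
use strict;
use warnings;
use File::Spec;

# Hypothetical location; adjust to where the file really lives.
my $file1 = File::Spec->catfile('C:/data', 'input.txt');

if (-e $file1) {
    print "Found $file1\n";
}
else {
    print "Missing $file1\n";
}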
In Perl, one way to read the STDOUT of a subprocess is to use open:
open(PIPE, "ls -l |");
I was looking for a more object-oriented way to do this, though, and I've been using IO::Pipe with some success. I want to detect errors, though, specifically if the command is not executable. I can't figure out how to do that via IO::Pipe, though. Here's what I have:
use strict;
use warnings;
use IO::Pipe;
my ($cmd) = join(" ", @ARGV);
open(PIPE, "$cmd |") || die qq(error opening PIPE);
while (<PIPE>) {
chomp;
print "DBG1: $_\n";
}
close PIPE;
my($pipe) = IO::Pipe->new();
$pipe->reader($cmd);
die qq(error opening IO::Pipe) if $pipe->eof();
while (<$pipe>) {
chomp;
print "DBG2: $_\n";
}
$pipe->close();
If the sub-process command is invalid, both checks will correctly die. If the sub-process produces no output, though, eof() will report an error, even if the command itself is fine:
$ perl pipe.pl "ls -l >/dev/null"
error opening IO::Pipe at pipe.pl line 20.
A bunch of questions, then:
Is there a reasonable OO way to read from a sub-process in Perl?
Is IO::Pipe the correct tool to use?
If so, how do I check to make sure the sub-process command is created successfully?
If not, what should I be using?
I don't want to write to the sub-process, so I don't think I want IPC::Open2 or IPC::Open3. I would prefer to use a core module, if possible.
The issue is not IO::Pipe. The problem is eof is the wrong way to check for a pipe error. It doesn't mean there's no pipe, it means there's nothing to read from that pipe. You'd have the same problem with eof PIPE. It's perfectly fine for a sub-process to not print anything.
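One core-only way to check the result, sketched here under the assumption that you read the pipe to the end first, is to look at close and $? rather than eof:
# eof() only says there is nothing to read; the exit status says whether the child succeeded.
open my $fh, '-|', 'ls -l >/dev/null' or die "cannot start sub-process: $!";
while (<$fh>) { print "DBG: $_" }
close $fh;    # for a piped open, close() waits for the child and sets $?
if ($? != 0) {
    warn "sub-process exited with status ", $? >> 8, "\n";
}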
If you want to check the sub-process successfully ran, it turns out IO::Pipe already does that for you.
$pipe->reader("hajlalglagl");
# prints: IO::Pipe: Cannot exec: No such file or directory
Backticks is not a core module, but it seems to do what you're looking for.
use strict;
use warnings;
use Backticks;
my $cmd = Backticks->new(join(" ", @ARGV));
$cmd->run();
if ($cmd->success) {
    print $cmd->stdout;
} else {
    print "Command failed\n";
}
Running this with a valid command then invalid command gives the below results
io_pipe.pl "uname -o"
GNU/Linux
io_pipe.pl "uname -z"
Command failed
Update
As pointed out by @thisSuitIsNotBlack, this module changes the default behaviour of backticks in Perl. You should read the Notes section of the POD. However, the main one to be aware of is:
The overriding of backticks is provided by Filter::Simple. Source
filtering can be weird sometimes... if you want to use this module in
a purely traditional Perl OO style, simply turn off the source
filtering as soon as you load the module:
use Backticks;
no Backticks;
I have this Perl code:
foreach my $eachFile (@FileList)
{
$cmd = "xcopy /I /F /V /Y /R \"$ProjectPath\\$eachFile\" \"$NewPath\\\"";
print "\n\t$cmd\n";
my $result = system($cmd);
die "ERROR: Could not copy from $viewPath to $DLPath: $!" if ($result > 0);
}
In this code, I have put lines like system("pause") if ($debug), which pause the execution when the $debug variable is set.
Now, the above xcopy dies as one file is not present. But $! prints "Bad file descriptor" when run normally, and it prints "No such file or directory" when I set the $debug variable.
Any idea why does $! give different message for the two instances?
See the documentation for system. $! is only meaningful if system itself fails and returns -1. Otherwise, you have to figure out what the error means on your own by picking apart $?. A sample from the documentation:
if ($? == -1) {
print "failed to execute: $!\n";
}
elsif ($? & 127) {
printf "child died with signal %d, %s coredump\n",
($? & 127), ($? & 128) ? 'with' : 'without';
}
else {
printf "child exited with value %d\n", $? >> 8;
}
So you shift $? right by 8 bits, and that's the error code returned by xcopy. There's a list of xcopy's error codes here.
That's the way it should behave, anyway. In my testing with this:
my $ProjectPath = "c:\\temp";
my @FileList = qw(aaa.txt asdf bbb.txt empty_dir beans.txt foo.pl);
my $NewPath = "c:\\temp2";
foreach my $eachFile (@FileList) {
print "Copying $eachFile\n";
my $result = system(
qw(xcopy /I /F /V /Y /R), "$ProjectPath\\$eachFile", "$NewPath\\"
);
if ($? == -1) {
die "failed to execute: $!";
}
elsif ($? > 0) {
my $foo = $? >> 8;
print "Error code: $foo\n";
die "no files found to copy" if $foo == 1;
die "user pressed ctrl-c during copy" if $foo == 2;
die "initialization error" if $foo == 4;
die "disk write error writing" if $foo == 5;
die "unknown error";
}
}
It reports "initialization error" when it finds that there is no beans.txt in my temp directory. I don't know what I'd have to do to make xcopy return a 1. C'est la vie.
Don't use a system command when you can use Perl. Otherwise, why bother with Perl?
There is a file copy command in Perl. It's not part of the base commands, but is available via the File::Copy module. Don't be confused into thinking that this isn't part of Standard Perl, and shouldn't be used. The File::Copy module has been around for a long, long time, and is part of every Perl installation you will run into. Learn the various standard Perl modules, and use them as if they are part of the base Perl because they are.
And don't be afraid of installing CPAN Perl modules. These modules can extend Perl in wonderful ways (for example, parsing JSON, YAML, and XML files or interacting with the World Wide Web). Remember that the best Perl programmers are darn lazy and won't write code if someone else has already done it for them.
All I had to do to get that copy command was to add use File::Copy; in the top of my Perl script.
Note that my loop contains three steps:
I verify the file exists via the -f command.
I tell you what I am doing (You had /V there in your xcopy command)
I copy the file.
Also note that I changed your variable names. In Perl, the standard is to use underscores and lowercase for variable names. Also, it's now standard to simply say for instead of foreach which saves you typing four characters.
I use the use feature qw(say); pragma which enables the say command. I like it much better than print since say automatically adds the \n to the end of the line. This sounds like a tiny thing, but after you forget \n a few times, or find situations where adding \n changes your output in ways you didn't anticipate, you appreciate the say command.
I use another module called File::Path which gives me the make_path command. This command is an extension of the built in mkdir command. However, it makes the directory and all parent directories required to make the directory you want. If you don't need to worry about making the entire path, you can simply use mkdir and not include File::Path.
use strict;
use warnings;
use feature qw(say);
use File::Copy;
use File::Path qw(make_path);

...

if ( not -d $project_path ) {
    make_path $project_path
        or die qq(Can't create directory "$project_path");
}

for my $file ( @files ) {
    if ( not -f "$project_path/$file" ) {
        die qq(Can't copy non-existent file "$project_path/$file".);
    }
    say qq(Copying "$project_path/$file" to "$new_path");
    copy( "$project_path/$file", $new_path )
        or die qq(Can't copy "$file": $!);
}
Yes, this doesn't answer your direct question. However, I wanted you to know the Perl way of doing what you want, so you could become a better Perl programmer.
Using '\' as the path separator might be the issue; the Unix-style separator is '/'.