Perl backticks subprocess is causing EOF on STDIN

I'm having this issue with my perl program that is reading from a file (which I open on STDIN and read one line at a time using $line = <>). After I execute a `backtick` command and then go to read the next line from STDIN, I get undef, signaling EOF. I isolated it to the backtick command using debugging code as follows:
my $dir = dirname(__FILE__);
say STDERR "before: tell(STDIN)=" . tell(STDIN) . ", eof(STDIN)=" . eof(STDIN);
say STDERR "\#export_info = `echo nostdin | perl $dir/pythonizer_importer.pl $fullfile`;";
#export_info = `echo nostdin | perl $dir/pythonizer_importer.pl $fullfile`;
say STDERR "after: tell(STDIN)=" . tell(STDIN) . ", eof(STDIN)=" . eof(STDIN);
The output is:
before: tell(STDIN)=15146, eof(STDIN)=
@export_info = `echo nostdin | perl ../pythonizer_importer.pl ./Pscan.pm`;
after: tell(STDIN)=15146, eof(STDIN)=1
I recently added the echo nostdin | to the perl command, which had no effect. How do I run this command and get its STDOUT without messing up my STDIN? BTW, this is all running on Windows. I fire off the main program from a git bash, if that matters.

Try locally undefining STDIN before running the backticks command, as the example script below does. Note that any subroutines called from the sub that calls local will see the new value. You can also do open STDIN, "<", "file for child process to read"; after the local *STDIN but before the backticks; just remember to close() the file before restoring STDIN to its old value (a sketch of that variant follows the example output below).
The child process is affecting your STDIN because "the STDIN filehandle used by the command is inherited from Perl's STDIN." – perlop manual
This is just an example; in your actual script, replace the sed command with your actual command to run.
use strict;
use warnings;
# Run a command and get its output
sub get_output {
    # Prevent passing our STDIN to the child process
    local *STDIN;
    print "Running sed\n";
    # Replace the sed command with the actual command you want to run
    return `sed 's/a/b/'`;
}
my $output = get_output();
print $output;
#We can still read STDIN even after running a child process
print "Waiting for input\n";
print "Readline is " . scalar readline;
Input:
a
b
c
^D
line
Output:
Running sed
b
b
c
Waiting for input
Readline is line
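For the variant that hands the child its own input file, a minimal sketch (the filename child_input.txt is hypothetical):
sub get_output_from_file {
    # Give the child its own input instead of our STDIN
    local *STDIN;
    open STDIN, '<', 'child_input.txt' or die "Cannot open child input: $!";
    my $result = `sed 's/a/b/'`;
    close STDIN;    # close before the local STDIN is restored at scope exit
    return $result;
}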


Can I pass a string from perl back to the calling c-shell?

RHEL6
I have a c-shell script that runs a perl script. After dumping tons of stuff to stdout, it determines where (what dir) the parent shell should cd to when the perl script finishes. But that's a string, not an int, which is all I can pass back with exit().
Storing the name of the dir in a file which the c-shell script can read is what I have now. It works, but is not elegant. Is there a better way to do this? Maybe a little chunk of memory that I can share with the perl script?
Short:
Redirect Perl's streams, restoring them at the end to print that info for the shell script to take
Or, print that info last and have the shell script pass the output to the console and take the last line
Or, use a named pipe (either shell) or specific file descriptors (not csh) for that print
When the Perl script prints out that name you can assign it to a variable in the shell script
#!/bin/csh
set DIR = `perl -e'print "dir_name"'`
while in bash
#!/bin/bash
DIR="$(perl -e'print "dir_name"')"
where $(...) is preferred for the command substitution.
But those other prints to the console from the Perl script then need to be handled.
One way is to redirect all output in the Perl script other than that one print, which can be controlled by a command-line option (the filename to redirect to, which the shell script can print out afterwards).
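A minimal sketch of that redirect-and-restore idea, assuming a hypothetical --log option parsed with Getopt::Long:
use strict;
use warnings;
use Getopt::Long;

GetOptions('log=s' => \my $logfile) or die "usage: $0 --log FILE\n";

# Keep a duplicate of the real STDOUT, then point STDOUT at the log file
open my $real_stdout, '>&', \*STDOUT or die "Cannot dup STDOUT: $!";
open STDOUT, '>', $logfile or die "Cannot redirect to $logfile: $!";

print "...all normal prints now go to the log file...\n";

# The one value meant for the caller goes to the saved handle
print {$real_stdout} "dir_name\n";
The shell script then does set DIR = `perl script.pl --log /tmp/out.log` and can print the log file afterwards.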
Or, take all of Perl's output and pass it to the console, with the last line being the needed "return." This puts the burden on the Perl script to print that last (perhaps in an END block). The program's output can be printed from the shell script after it completes, or line by line as it is emitted.
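A sketch of the END-block flavor; $dir_name here stands for whatever the script computed along the way:
# Computed anywhere during the run; printed last no matter where the script exits
our $dir_name;
END { print "$dir_name\n" if defined $dir_name }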
Or, use a named pipe (both shells) or a specific file descriptor (bash only) to which the Perl script can print that information. In this case its streams go straight to the console.
The question explicitly mentions csh, so it is given below. But I must repeat the old and worn fact that shell scripting is far better done in bash than in csh. I strongly recommend reconsidering.
bash
If you need the program's output on the console as it goes, take and print it line by line
#!/bin/bash
while read line; do
    echo "$line"
    DIR=$line
done < <(perl script.pl)
echo "$DIR"
Or, if you don't need output on the console before the script is finished
#!/bin/bash
mapfile -t lines < <(perl script.pl)
DIR="${lines[-1]}"
printf '%s\n' "${lines[@]}" # print script.pl's output
Or, use file descriptors for that particular print
F=$(mktemp) # safe filename
exec 3> "$F" # open fd 3 to write to it
exec 4< "$F" # open fd 4 to read from it
rm -f "$F" # remove file(name) for safety; opened fd's can still access
perl -E'$fd=shift; say "...normal prints to STDOUT...";
open(FH, ">&=$fd") or die $!;
say FH "dirname";
close FH
' 3
read dir_name <&4
exec 3>&- # close them
exec 4<&-
echo "$dir_name"
I couldn't get it to work with a single file descriptor for both reading and writing (exec 3<> ...), I think because the read can't rewind after the write, so separate descriptors are used.
With a Perl script (and not the demo one-liner above) pass the fd number as a command-line option. The script can then do this only if it's invoked with that option.
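For example, the script side might look like this (the --fd option name is just an illustration):
use strict;
use warnings;
use Getopt::Long;

GetOptions('fd=i' => \my $fd) or die "bad options\n";

print "...normal prints to STDOUT...\n";

# Write the dir name to the given descriptor only when invoked with --fd
if (defined $fd) {
    open my $out, '>&=', $fd or die "Cannot open fd $fd: $!";
    print {$out} "dir_name\n";
    close $out;
}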
Or, use a named pipe very similarly to how it's done for csh below. This is probably best here, if the manipulation of the program's STDOUT isn't to your liking.
csh
Iterate over the program's (completed) output line by line
#!/bin/csh
foreach line ( "`perl script.pl`" )
    echo "$line"
    set dir_name = "$line"
end
echo "Directory name: $dir_name"
or extract the last line first and then print the whole output
#!/bin/csh
set lines = ( "`perl script.pl`" )
set dir_name = $lines[$#lines]
# Print program's output
while ( $#lines )
    echo "$lines[1]"
    shift lines
end
or use a named pipe
set fifo_name = "/tmp/fifo$$" # or use mktemp
mkfifo "$fifo_name"
( perl script.pl --fifo $fifo_name [other args] & )
set dir_name = `cat "$fifo_name"`
rm -f $fifo_name
echo "dir name from FIFO: $dir_name"
The Perl command is run in the background since a FIFO blocks until it is both written and read. If the shell script were to wait for perl ... to complete, the Perl script would block while writing to the FIFO (since nothing is reading it), so the shell would never get to read it; we would deadlock. It is also in a subshell, with ( ), to avoid the informational prints about the background job.
The --fifo NAME command-line option is needed so that Perl script knows what special file to use (and not to do this if the option is not there).
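In script form that check might look like this (again assuming Getopt::Long; only the option name --fifo is fixed by the invocation above):
use strict;
use warnings;
use Getopt::Long;

GetOptions('fifo=s' => \my $fifo) or die "bad options\n";

print "...normal prints to STDOUT...\n";

# Write the dir name to the named pipe only when invoked with --fifo
if (defined $fifo) {
    open my $ff, '>', $fifo or die "Cannot open $fifo: $!";
    print {$ff} "dir_name_$$\n";
    close $ff;
}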
For an in-line example replace ( perl script ...) with this one-liner, used above as well
( perl -E'$ff = shift; say qq(\t...normal prints to STDOUT...);
open FF, ">$ff" or die $!;
say FF "dir_name_$$";
close FF
' $fifo_name
& )
(broken over lines for readability)

Perl code - pipe "|" in `open()` statement

I have the following code in a Perl .pl file. Do you think there's any issue with this code? I can't understand how it'll work, as in the 2nd line there's a "|" character without a command following it.
while ( $temp ne "" ) {
    open( PS, "ps -ef | grep deploy.sh | grep ssh | grep -v grep|" );
    $temp = <PS>;
    close(PS);
    print "The Deploy scripts are still running. Now sleeping 20\n";
    sleep 20;
}
That stray | is Perl's way of saying that you want the output of that command to be made available to your program. There are several equivalent forms.
Take a look here: open - perldoc.perl.org. Especially at the line that says:
open(FOO, "cat -n '$file'|");
open(my $FOO, "foo");
opens the file for reading, while
open(my $FOO, "foo |");
tells Perl that foo is a command to run whose output is to be piped to the file handle $FOO.
Since open(FOO, "foo |") just reads the output of the foo command from FOO, each line in the output of the foo command becomes a line in the FOO file. The following is identical to the shell command ps -ef:
open(PS, 'ps -ef |');
while (<PS>) { print $_ }
The command in the 2nd line of your sample is a shell pipeline that filters the process list down to the running instances of deploy.sh. If that output has at least one line, there are still instances running; that's why the loop only reads the first line of output into the $temp variable.
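As an aside, a safer modern equivalent uses a lexical filehandle and the three-argument list form of open, which avoids the shell entirely; the filtering then moves into Perl (a sketch):
open(my $ps, '-|', 'ps', '-ef') or die "Cannot run ps: $!";
while (my $line = <$ps>) {
    print $line if $line =~ /deploy\.sh/ && $line =~ /ssh/;
}
close($ps);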

How to Call Perl script from tcl script

I have a file with 4 perl commands, and I want to open the file from Tcl and execute each perl command.
TCL script
runcmds $file
proc runcmds {file} {
    set fileid [open $file "r"]
    set options [read $fileid]
    close $fileid
    set options [split $options "\n"]  ;# separating each command with a newline
    foreach line $options {
        exec perl $line
    }
}
When executing the above script, I am getting the error: "can't open the perl script /../../ : No such file or directory. Use -S to search $PATH for it."
tl;dr: You were missing -e, causing your script to be interpreted as a filename.
To run a perl command from inside Tcl:
proc perl {script args} {
    exec perl -e $script {*}$args
    # or in 8.4: eval [list perl -e $script] $args
}
Then you can do:
puts [perl {
    print "Hello ";
    print "World\n";
}]
That's right, an arbitrary perl script inside Tcl. You can even pass in other arguments as necessary; access them from perl via @ARGV. (You'll need to add other options like -p explicitly.)
Note that this can pass whole scripts; you don't need to split them up (and probably shouldn't; you can do lots with one-liners but they tend to be awful to maintain and there's no technical reason to require it).

From Perl, spawn a shell, configure it, and fork the STDOUT

I use a Perl script to configure and spawn a compiled program that needs a subshell configured a certain way, so I use $returncode = system("ulimit -s unlimited; sg ourgroup 'MyExecutable.exe'");
I want to capture and parse the STDOUT from that, but I need it forked, so that the output can be checked while the job is still running. This question comes close:
How can I send Perl output to both STDOUT and a variable? The highest-rated answer describes a function called backtick() that creates a child process, captures STDOUT, and runs a command in it with exec().
But the calls I have require multiple lines to configure the shell. One solution would be to create a disposable shell script:
#!/bin/sh
# disposable.sh
ulimit -s unlimited
sg ourgroup 'MyExecutable.exe'
I could then get what I need either with backtick("disposable.sh") or open(PROCESS, 'disposable.sh |').
But I'd really rather not make a scratch file for this. system() happily accepts multi-line command strings. How can I get exec() or open() to do the same?
If you want to use the shell's power (that includes loops and variables, but also multiple command execution), you have to invoke the shell (open(..., 'xxx|') doesn't do that).
You can pass your shell script to the shell with the -c option of the shell (another possibility would be to pipe the commands to the shell, but that's more difficult IMHO).
That means calling the backtick function from the other answer like this:
backtick("sh", "-c", "ulimit -s unlimited; sg ourgroup 'MyExecutable.exe'");
The system tee with backticks will do this, no?
my $output = `(ulimit -s unlimited; sg ourgroup 'MyExecutable.exe') | tee /dev/tty`;
or modify Alnitak's backticks (so it does use a subshell)?
my $cmd = "ulimit -s unlimited; sg ourgroup 'MyExecutable.exe'";
my $pid = open(CMD, "($cmd) |");
my $output;
while (<CMD>) {
    print STDOUT $_;
    $output .= $_;
}
close CMD;
Expect should be used, as you are interacting with your program: http://metacpan.org/pod/Expect
Assuming the /bin/bash prompt on your *nix looks something like bash-3.2$, the program below can be used to launch a number of commands via $exp->send on the bash console; the output from each command can then be parsed for further actions.
#!/usr/bin/perl
use Expect;

my $command = "/bin/bash";
my @parameters;
my $exp = new Expect;
$exp->raw_pty(1);
$exp->spawn($command);
$exp->expect(5, '-re', 'bash.*$');
$exp->send("who \n");
$exp->expect(10, '-re', 'bash.*$');
my @output = $exp->before();
print "Output of who command is @output \n";
$exp->send("ls -lt \n");
$exp->expect(10, '-re', 'bash.*$');
@output = $exp->before();
print "Output of ls command is @output \n";

Why can't I get the output of a command with system() in Perl?

When executing a command on the command-line from Perl, is there a way to store that result as a variable in Perl?
my $command = "cat $input_file | uniq -d | wc -l";
my $result = system($command);
$result always turns out to be 0.
Use "backticks":
my $command = "cat $input_file | uniq -d | wc -l";
my $result = `$command`;
And if you're interested in the exit code, you can capture it with:
my $retcode = $?;
right after making the external call.
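Note that $? is the raw wait status, not the exit code itself; to pull the pieces apart:
my $result = `$command`;
my $status = $?;             # raw 16-bit wait status
my $exit   = $status >> 8;   # the command's actual exit code
my $signal = $status & 127;  # signal that killed the child, if any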
From perlfaq8:
Why can't I get the output of a command with system()?
You're confusing the purpose of system() and backticks (``). system() runs a command and returns exit status information (as a 16-bit value: the low 7 bits are the signal the process died from, if any, and the high 8 bits are the actual exit value). Backticks (``) run a command and return what it sent to STDOUT.
$exit_status = system("mail-users");
$output_string = `ls`;
You can use the Perl back-ticks to run a shell command, and save the result in an array.
my @results = `$command`;
To get just a single result from the shell command, you can store it in a scalar variable:
my $result = `$command`;
If you are expecting back multiple lines of output, it's easier to use an array, but if you're just expecting back one line, it's better to use scalar.
(something like that, my perl is rusty)
You can use backticks, as others have suggested. That's fine if you trust whatever variables you're using to build your command.
For more flexibility, you can open the command as a pipe and read from that as you would a file. This is particularly useful when you want to pass variables as command line arguments to the program and you don't trust their source to be free of shell escape characters, as open in recent Perl (>= 5.8) has the capacity to invoke a program from an argument list. So you can do the following:
open(FILEHANDLE, '-|', 'uniq', 'some-file.txt') or die "Cannot fork: $!\n";
while (<FILEHANDLE>) {
    # process a line from $_
}
close FILEHANDLE or die "child error: $!\n";
IPC::System::Simple provides the capture command, which is a safe, portable alternative to backticks. It (and the other commands from this module) is highly recommended.
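A quick sketch of capture in use (it throws an exception on failure instead of leaving you to inspect $?); the filename is just an example:
use IPC::System::Simple qw(capture);

my $input_file = 'some-file.txt';    # example input

# Multi-argument form bypasses the shell entirely
my @lines = capture('uniq', '-d', $input_file);

# Single-string form goes through the shell, so pipes work
my $count = capture("uniq -d $input_file | wc -l");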
I'm assuming this is a 'contrived' example, because it is equivalent to: uniq -d $input_file | wc -l.
In almost all my experience, the only reason for putting the results into a perl variable is to parse them later. In that case, I use the following pattern:
my $last;
my $lc = 0;
open(FN, '<', $input_file) or die "Cannot open $input_file: $!";
while (<FN>) {
    # any other parsing of the current line
    $lc++ if defined $last && $last eq $_;
    $last = $_;
}
close(FN);
print "$lc\n";
This also has the added advantages:
no fork for shell, cat, uniq, and wc
faster
parse and collect the desired input