Propagate exit status across pipes - perl

I would like to use a small script to do some cosmetic work on the output of my gcc, so I use this command:
mygcc foo.c 2>&1 | myscript.pl
Basically my script does things like this:
use Term::ANSIColor;    # provides colored()

$error = 0;
while (<>)
{
    s/^"(.*)"\s*,\s*line\s*(\d+)\s*:\s*(cc\d+)\s*:/colored("[$3]", 'bold red').colored(" $1", 'red').":".colored("$2", 'yellow')/ge;
    s/ \^/colored(" ^", 'yellow')/e;
    s/(error:.*$)/colored($1, 'red')/ge;
    s/(warning.*$)/colored($1, 'yellow')/ge;
    print;
    $error = -1;
}
Unfortunately the exit code from gcc is not propagated through the pipe: the shell reports the exit status of the last command in the pipeline, which is myscript.pl. What I need is to get the exit code from gcc and have my script exit with it.
Without this, make won't correctly stop the build process in case of an error.
How can I achieve this?

Try using a sub shell:
( mygcc foo.c; echo "gcc returned $?" ) |& myscript.pl
The ( cmd ) construct is used to launch cmd in a sub-shell. Your current shell will fork itself, and the commands will be executed by the child shell. It's an easy way to run multiple commands and have the output fed to a pipe.
The $? variable is the exit status of the last command.
The cmd1 |& cmd2 construct (available in csh, and in bash 4+) is equivalent to cmd1 2>&1 | cmd2.
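On the Perl side, here is a minimal sketch of how myscript.pl could consume that marker line and propagate the status (the "gcc returned" text matches the echo in the subshell above; the colorizing substitutions are elided):
#!/usr/bin/perl
# Sketch: filter out the "gcc returned N" marker emitted by the subshell
# and exit with gcc's status so make sees failures.
use strict;
use warnings;

my $status = 0;
while (<>) {
    if (/^gcc returned (\d+)$/) {
        $status = $1;   # remember gcc's exit code
        next;           # don't print the marker itself
    }
    print;              # colorizing substitutions would go here
}
exit $status;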

Take a look at this. You can then use the %ENV hash to access the gcc return status and return that value from your Perl script.

Related

How can Perl execute a command in the same shell as itself?

I am not sure whether the title really makes sense for this problem. My problem is simple: I want to write a Perl script that changes my current directory, and I hope the change persists after the script exits. The script looks like this:
if ($#ARGV != 0) {
    print "usage: mycd <dir symbol>\n";
    exit -1;
}
my $dn = shift @ARGV;
if ($dn eq "kite") {
    my $cl = `cd ./private`;
    print $cl."\n";
}
else {
    print "unknown directory symbol\n";
    exit -1;
}
However, my current directory doesn't change after calling the script. What is the reason? How can I resolve it?
No, the Perl script will be run in a subprocess so it will not be able to affect the environment of the process that called it.
There are various tricks you can use such as sourcing shell scripts (in the context of the current shell rather than a sub-process), or using bash functions and aliases, but they won't work here.
How can Perl execute a command in the same shell as itself?
Unless you have a very atypical shell, shells can only receive commands via STDIN, via their command line, and possibly via a command-evaluation builtin.
The first two are out unless the Perl script is the parent of the shell, but you can use the third one indirectly, as in the following example.
script.pl:
#!/usr/bin/perl
print "chdir 'private'\n";
bash script:
echo "$PWD" # /some/dir
eval "$( script.pl )"
echo "$PWD" # /some/dir/private
Of course, if you use bash, you could hide the details in a shell function.
mycd () {
    eval "$( mycd.pl "$@" )"
}
allowing you to use
mycd
or even
mycd foo
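A hypothetical mycd.pl along those lines (the symbol-to-directory map is invented for illustration):
#!/usr/bin/perl
# Hypothetical mycd.pl: print a shell command for the calling shell to eval.
use strict;
use warnings;

# Made-up symbol-to-directory map; adjust to taste.
my %dirs = (kite => './private');

my $dn = shift // '';
if (my $dir = $dirs{$dn}) {
    print "cd '$dir'\n";     # the caller's eval runs this
} else {
    warn "unknown directory symbol\n";
    print "false\n";         # make the eval'd command fail
}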

From Perl, spawn a shell, configure it, and fork the STDOUT

I use a Perl script to configure and spawn a compiled program that needs a subshell configured a certain way, so I use:
$returncode = system("ulimit -s unlimited; sg ourgroup 'MyExecutable.exe'");
I want to capture and parse the STDOUT from that, but I need it forked, so that the output can be checked while the job is still running. This question comes close:
How can I send Perl output to both STDOUT and a variable? The highest-rated answer describes a function called backtick() that creates a child process, captures STDOUT, and runs a command in it with exec().
But the calls I have require multiple lines to configure the shell. One solution would be to create a disposable shell script:
#disposable.sh
#!/bin/sh
ulimit -s unlimited
sg ourgroup 'MyExecutable.exe'
I could then get what I need either with backtick('disposable.sh') or open(PROCESS, 'disposable.sh |').
But I'd really rather not make a scratch file for this. system() happily accepts multi-line command strings. How can I get exec() or open() to do the same?
If you want to use the shell's power (that includes loops and variables, but also multiple-command execution), you have to invoke the shell (open(..., 'xxx|') doesn't do that).
You can pass your shell script to the shell with the -c option of the shell (another possibility would be to pipe the commands to the shell, but that's more difficult IMHO).
That means calling the backtick function from the other answer like this:
backtick("sh", "-c", "ulimit -s unlimited; sg ourgroup 'MyExecutable.exe'");
The system tee, used with backticks, will do this, no?
my $output = `(ulimit -s unlimited; sg ourgroup 'MyExecutable.exe') | tee /dev/tty`;
or modify Alnitak's backticks (so it does use a subshell)?
my $cmd = "ulimit -s unlimiited ; sg ourgroup 'MyExecutable.exe'";
my $pid = open(CMD, "($cmd) |");
my $output;
while (<CMD>) {
print STDOUT $_;
$output .= $_;
}
close CMD;
Expect should be used, since you are interacting with your program: http://metacpan.org/pod/Expect
Assuming the prompt on your *nix matches something like bash-3.2$, the program below can be used to launch a number of commands on the bash console with $exp->send, and the output of each command can then be parsed for further action.
#!/usr/bin/perl
use Expect;

my $command = "/bin/bash";
my @parameters;
my $exp = new Expect;
$exp->raw_pty(1);
$exp->spawn($command);
$exp->expect(5, '-re', 'bash.*$');

$exp->send("who \n");
$exp->expect(10, '-re', 'bash.*$');
my @output = $exp->before();
print "Output of who command is @output \n";

$exp->send("ls -lt \n");
$exp->expect(10, '-re', 'bash.*$');
@output = $exp->before();
print "Output of ls command is @output \n";

Execute a command using a perl script

I have a simple command in Unix like:
cat myfile.txt >&mytemp.txt&
The above command will simply create a copy of the file myfile.txt.
When I execute the command on the command line, it returns the process ID, like below:
> cat myfile.txt > & mytemp.txt &
[1] 769
>
I am forming the same command inside a perl script and calling it with system as below:
my $cmd="cat myfile.txt>&mytemp.txt&";
my $info = system("$cmd");
but the system command fails with the error message below:
sh: mytemp.txt: bad number
I even tried escaping the > and &, but there is no change in the error message.
May I know the reason for this? Where am I going wrong?
I'm pretty sure that you can't use the trailing & on this. If you want your program to continue while the command runs, then fork and have the child process run the call, then exit. Possibly exec can do this, though I haven't tried doing that with output redirection before...
Like the message says, that's not a valid sh command. Is it perhaps a csh command?
system('csh', '-c', $cmd);
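Alternatively, if you want to stay with sh, note that >&file is csh syntax; a sketch of the sh-compatible spelling (same file names as the question):
# sh has no ">&file" operator; the sh equivalent is "> file 2>&1".
my $cmd = 'cat myfile.txt > mytemp.txt 2>&1 &';
system($cmd) == 0 or warn "command failed: $?";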
Try this:
perl -e "`cat myfile.txt>&mytemp.txt&`;"
This executes the command and returns its output.
So it's possible to do:
#!/usr/bin/perl
my $content = `cat /etc/passwd`;
print $content;
If you put the code into a perl script:
cat-test.pl
#!/usr/bin/perl
use strict;
use warnings;
my $res = `cat myfile.txt>&mytemp.txt&`;

troubles while redirecting stderr in csh

I'm writing a Perl script that should execute commands in a shell and parse their output. I intend to use csh as the shell. I started with this:
my $out = `cmd`;
but it doesn't capture STDERR, which I need too. Running sh with output redirection does nothing:
my $out = `sh -c "cmd 2>&1"`;
still captures only STDOUT, not STDERR.
Even redirecting to file in csh doesn't work for me
tcsh$ cmd >& logfile.log
still captures STDOUT only %)
The command I'm trying to execute is actually a sh script, and some commands in this script print to STDERR; I want to capture that output. If I execute sh -c "cmd 2>/dev/null", STDERR actually goes to /dev/null and only STDOUT is printed in the terminal.
Could anyone help me with this?
I believe there is something you are not telling us. Are you on cygwin? Or Windows? Do you have a PERL5SHELL environment variable set?
There is something that you are not telling us because both of these work fine on the five platforms I can easily test on:
% perl -le '$out = `sh -c "grep missing /dev/nowhere 2>&1" | cat -n`; chomp $out; print "got <<<$out>>>"'
got <<< 1 grep: /dev/nowhere: No such file or directory>>>
But in fact, there is no reason to call sh(1) explicitly for shelling out. That's because Perl always calls sh(1) for all its backtick, pipe-open, and system() shell-outs:
% perl -le '$out = `grep missing /dev/nowhere 2>&1 | cat -n`; chomp $out; print "got <<<$out>>>"'
got <<< 1 grep: /dev/nowhere: No such file or directory>>>
The only exception to this I can think of occurs on non-Unix systems, where, because they have no /bin/sh, something else is defined.
But under no circumstances will simple shell-outs be calling tcsh(1) behind your back. You’d’ve had to’ve seriously hacked the perl(1) source to get that to happen. I also rather doubt you could (easily) hack the binary, since the string "/bin/tcsh" is going to be longer than "/bin/sh", and it isn’t very often going to be found in /bin/ anyway.
That you can’t get stderr redirection working even from the shell says something pretty weird is going on. I think we need more information.
Here, you are capturing the STDOUT of sh, which is not the STDERR of cmd:
my $out = `sh -c "cmd 2>&1"`;
Can you just run cmd directly?
my $out = `cmd 2>&1`;
Backquotes capture STDOUT not STDERR.
system will dump both stdout and stderr to their parent's settings.
If you want to capture STDERR, you need something like IPC::Open3:
Extremely similar to open2(), open3() spawns the given $cmd and connects CHLD_OUT for reading from the child, CHLD_IN for writing to the child, and CHLD_ERR for errors. If CHLD_ERR is false, or the same file descriptor as CHLD_OUT, then STDOUT and STDERR of the child are on the same filehandle.
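A minimal open3() sketch along those lines (cmd is a placeholder; error handling kept short):
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';

my $err = gensym;    # STDERR needs a pre-created handle, not an autovivified one
my $pid = open3(my $in, my $out, $err, 'cmd');
close $in;           # nothing to send to the child

my @stdout = <$out>; # caution: draining one stream to EOF before the other
my @stderr = <$err>; # can deadlock on very chatty commands; fine for a sketch

waitpid($pid, 0);
printf "exit=%d\n", $? >> 8;
print "stderr was: @stderr";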
You said that running the command cmd >& logfile.log in tcsh sends only cmd's stdout to the log file, not its stderr. That doesn't make sense.
Try replacing cmd with the following script:
#!/bin/sh
echo stdout
echo STDERR 1>&2
Both "stdout" and "STDERR" should show up in logfile.log.
If so, then perhaps your "cmd" is doing something odd. My best guess is that cmd is writing to /dev/tty, not to either stdout or stderr; that wouldn't be affected by redirection.
To see what I mean, add this line to the above script:
echo tty > /dev/tty
I don't really have time to mock up an example as I normally would, nor even test one. I am thinking that you might try using Capture::Tiny to see if that helps.
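For what it's worth, a minimal Capture::Tiny sketch (untested here; cmd is a placeholder):
#!/usr/bin/perl
# Capture STDOUT, STDERR, and the exit status of an external command.
use strict;
use warnings;
use Capture::Tiny qw(capture);

my ($stdout, $stderr, $exit) = capture {
    system('cmd');    # placeholder for the real command
};
printf "exit=%d\nstdout: %s\nstderr: %s\n", $exit >> 8, $stdout, $stderr;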

How do I test if a perl command embedded in a bash script returns true?

So, I have a bash script inside of which I'd like to have a conditional which depends on what a perl script returns. The idea behind my code is as follows:
for i in $(ls); do
if $(perl -e "if (\$i =~ /^.*(bleh|blah|bluh)/) {print 'true';}"); then
echo $i;
fi;
done
Currently, this always returns true, and when I tried it with [[]] around the if statement, I got errors. Any ideas anyone?
P.s. I know I can do this with grep, but it's just an example. I'd like to know how to have Bash use Perl output in general
P.p.s. I know I can do this in two lines, setting the Perl output to a variable and then testing for that variable's value, but I'd rather avoid using that extra variable if possible. It seems wasteful.
If you use exit, you can just use an if directly. E.g.
if perl -e "exit 0 if (successful); exit 1"; then
echo $i;
fi;
0 is success, non-zero is failure, and 0 is the default if you don't call exit.
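Applied to the loop from the question, that might look like this (a sketch; the file name is passed as an argument instead of being interpolated into the Perl code):
for i in *; do
    if perl -e 'exit($ARGV[0] =~ /^.*(bleh|blah|bluh)/ ? 0 : 1)' "$i"; then
        echo "$i"
    fi
done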
To answer your question, you want perl to exit 1 for failure and exit 0 for success. That being said, you're doing this the wrong way. Really. Also, don't parse the output of ls. You'll cause yourself many headaches.
for file in *; do
if [[ $file = *bl[eau]h ]]; then
echo "$file matches"
fi
done
for file in * ; do
perl -e "shift =~ /^.*(bleh|blah|bluh)/ || exit 1" "$file" && echo $file: true
done
You should never parse the output of ls. You will have, at least, problems with file names containing spaces. Plus, why bother when your shell can glob on its own?
Quoting $file when passing to the perl script avoids problems with spaces in file names (and other special characters). Internally I avoided expanding the bash $file variable so as to not run afoul of quoting problems if the file name contained ", ' or \
Perl seems to (for some reason) always return 0 if you don't exit with an explicit value, which seems weird to me. Since this is the case, I test for failure inside the script and exit nonzero in that case.
The return value of the previous command is stored in the bash variable $?. You can do something like:
perl someargs script.pl more args
if [ $? -eq 0 ] ; then
echo true
else
echo false
fi
It's a good question. My advice is: keep it simple and go POSIX (avoid Bashisms [1]) where possible.
so ross$ if perl -e 'exit 0'; then echo Good; else echo Bad; fi
Good
so ross$ if perl -e 'exit 1'; then echo Good; else echo Bad; fi
Bad
[1] Sure, the OP was tagged bash, but others may want to know the generic POSIX form.