Perl -- command executed inside a script hangs

When I run the following script, it does exactly what I want it to do and exits:
setDisplay.sh:
#!/bin/bash
Xvfb -fp /usr/share/fonts/X11/misc/ :22 -screen 0 1024x768x16 2>&1 &
export DISPLAY=:22
When I run ./setDisplay.sh, everything works fine.
OK, here's where the fun starts...
I have a Perl script that calls setDisplay...
Here is the eamorr.pl script:
#!/usr/bin/perl
use strict;
use warnings;
my $homeDir="/home/eamorr/Dropbox/site/";
my $cmd;
my $result;
print "-----Setting display...\n";
$cmd="sh $homeDir/setDisplay.sh";
print $cmd."\n";
$result=`$cmd`;
print $result;
It just hangs when I run ./eamorr.pl
I'm totally stuck...

When you do this:
$result=`$cmd`;
a pipe is created connecting the perl process to the external command, and perl reads from that pipe until EOF.
Your external command creates a background process which still has the pipe on its stdout (and also its stderr since you did 2>&1). There will be no EOF on that pipe until the background process exits or closes its stdout and stderr or redirects them elsewhere.
If you intend to collect the stdout and stderr of Xvfb into the perl variable $result, you'll naturally have to wait for it to finish. If you didn't intend that, I can't guess what you were trying to do with the 2>&1.
Also a script that ends with an export command is suspect. It can only modify its own environment, and then it immediately exits so there's no noticeable effect. Usually that's a sign that someone is trying to modify the parent process's environment, which is not possible.
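If the goal is simply to have Xvfb up on display :22 before the rest of eamorr.pl runs, one way out is to detach Xvfb's output from the pipe and set DISPLAY inside the Perl process itself. A minimal sketch (the log path is only an example, not from the original script):
print "-----Setting display...\n";
# Send Xvfb's output somewhere other than the backtick pipe, so nothing
# holds the pipe open; the trailing & backgrounds it via /bin/sh.
system('Xvfb -fp /usr/share/fonts/X11/misc/ :22 -screen 0 1024x768x16 >> /tmp/xvfb.log 2>&1 &');
# A child shell's export cannot reach this process, so set DISPLAY here:
$ENV{DISPLAY} = ':22';
Anything the Perl script launches after this point will inherit that DISPLAY.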

Related

Perl script interacting with another program's STDIN

I have a Perl script that calls another program with backticks, and checks the output for certain strings. This is running fine.
The problem I have is when the other program fails at what it is doing and waits for user input: it requires the user to press Enter twice before the program quits.
How do I tell my Perl script to press enter twice on this program?
The started command inherits your script's STDIN and STDERR; only its STDOUT is piped to your script.
You could just close your STDIN before running the command and there will be no input source. Reading from STDIN will cause an error and the called command will exit:
close STDIN;
my @slines = `$command`;
This also removes any chance of console input to your script.
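If you still want your script to be able to read from its own console, a variant (just a sketch) is to redirect only the child's input instead of closing STDIN in the parent:
# Give the child an empty input source; the script's own STDIN is untouched.
my @slines = `$command < /dev/null`;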
Another approach is to use IPC::Open2, which allows your script to control both the STDIN and the STDOUT of the command at the same time:
use IPC::Open2;
open2(my $chld_out, my $chld_in, 'some cmd and args');
print $chld_in "\n\n";
close $chld_in;
my @slines = <$chld_out>;
close $chld_out;
This provides the two newlines the command is waiting for and reads the command's output.
You could just pipe them in:
printf '\n\n' | yourcommand

How to capture STDOUT from executable (cap) executed within a perl script executed from a crontab

Whew that is a long-winded title. But it explains my issue:
I have a crontab that runs a Perl script.
That Perl script runs a cap task, which prints some status messages to STDOUT.
The Perl script is supposed to capture that STDOUT (currently using backticks) from cap and parse it.
Now, this works 100% fine when I run the script from a bash shell. However, when I run the script from a crontab, the Perl script doesn't capture any output from the cap task.
Has anyone dealt with anything like this before? Thanks.
Maybe your cap executable died without emitting any message to stdout. Did you check the success status of the execution?
Could you try this?
$check_result = `$cmd 2>&1`;
if ($?) {
    die "$cmd failed with $check_result, $!";
}
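If the command does run but exits non-zero, note that $? holds the raw wait status; the actual exit code is in its high byte. A small sketch along the same lines (the cap invocation is only a placeholder):
my $output = `cap deploy 2>&1`;   # placeholder cap task
my $exit   = $? >> 8;             # high byte of $? is the exit status
die "cap exited with status $exit:\n$output" if $exit != 0;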

troubles while redirecting stderr in csh

I'm writing a Perl script that should execute commands in a shell and parse their output. The shell I intend to use is csh. I started with this:
my $out = `cmd`
but it doesn't capture STDERR, which I also need. Running sh with output redirection doesn't help:
my $out = `sh -c "cmd 2>&1"`
still captures only STDOUT, not STDERR.
Even redirecting to a file in csh doesn't work for me:
tcsh$ cmd >& logfile.log
still captures STDOUT only %)
The command I'm trying to execute is actually an sh script, and some commands in this script print to STDERR; I want to capture that output. If I execute sh -c "cmd 2>/dev/null", STDERR actually goes to /dev/null and only STDOUT is printed in the terminal.
Could anyone help me with this?
I believe there is something you are not telling us. Are you on cygwin? Or Windows? Do you have a PERL5SHELL environment variable set?
There is something that you are not telling us because both of these work fine on the five platforms I can easily test on:
% perl -le '$out = `sh -c "grep missing /dev/nowhere 2>&1" | cat -n`; chomp $out; print "got <<<$out>>>"'
got <<< 1 grep: /dev/nowhere: No such file or directory>>>
But in fact, there is no reason to call sh(1) explicitly for shelling out. That's because Perl always calls sh(1) for all its backtick, pipe-open, and system() shell-outs:
% perl -le '$out = `grep missing /dev/nowhere 2>&1 | cat -n`; chomp $out; print "got <<<$out>>>"'
got <<< 1 grep: /dev/nowhere: No such file or directory>>>
The only exception to this I can think of occurs on non-Unix systems, where, because they have no /bin/sh, something else is defined.
But under no circumstances will simple shell-outs be calling tcsh(1) behind your back. You’d’ve had to’ve seriously hacked the perl(1) source to get that to happen. I also rather doubt you could (easily) hack the binary, since the string "/bin/tcsh" is going to be longer than "/bin/sh", and it isn’t very often going to be found in /bin/ anyway.
That you can’t get stderr redirection working even from the shell says something pretty weird is going on. I think we need more information.
Here, you are capturing the STDOUT of sh, which is not the STDERR of cmd:
my $out = `sh -c "cmd 2>&1"`;
Can you just run cmd directly?
my $out = `cmd 2>&1`;
Backquotes capture STDOUT, not STDERR.
system sends both stdout and stderr to wherever the parent's handles point.
If you want to capture STDERR, you need something like IPC::Open3:
Extremely similar to open2(), open3() spawns the given $cmd and connects CHLD_OUT for reading from the child, CHLD_IN for writing to the child, and CHLD_ERR for errors. If CHLD_ERR is false, or the same file descriptor as CHLD_OUT, then STDOUT and STDERR of the child are on the same filehandle.
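A minimal sketch of that approach (the command and its argument are placeholders); note that open3 will not autovivify the error handle the way it does the other two, so it needs a pre-created glob from Symbol:
use IPC::Open3;
use Symbol 'gensym';

my $err = gensym;                        # open3 won't autovivify this one
my $pid = open3(my $in, my $out, $err, 'cmd', 'arg1');
close $in;                               # nothing to send to the child
my @stdout = <$out>;                     # child's STDOUT
my @stderr = <$err>;                     # child's STDERR, captured separately
waitpid $pid, 0;                         # reap the child; status lands in $?
For commands that write a lot to both handles you would want to interleave the reads (IO::Select, or a wrapper like IPC::Run) to avoid a deadlock, but for short-lived commands this is usually enough.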
You said that running the command cmd >& logfile.log in tcsh sends only cmd's stdout to the log file, not its stderr. That doesn't make sense.
Try replacing cmd with the following script:
#!/bin/sh
echo stdout
echo STDERR 1>&2
Both "stdout" and "STDERR" should show up in logfile.log.
If so, then perhaps your "cmd" is doing something odd. My best guess is that cmd is writing to /dev/tty, not to either stdout or stderr; that wouldn't be affected by redirection.
To see what I mean, add this line to the above script:
echo tty > /dev/tty
I don't really have time to mock up an example as I normally would, nor even test one. I am thinking that you might try using Capture::Tiny to see if that helps.
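For reference, a minimal sketch of how Capture::Tiny is typically used (the command is just a stand-in):
use Capture::Tiny qw(capture);

# capture() collects everything written to STDOUT and STDERR inside the
# block, including output from external commands run with system().
my ($stdout, $stderr, $exit) = capture {
    system('cmd', 'arg1');
};
print "out: $stdout\nerr: $stderr\nwait status: $exit\n";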

jsvc (tomcat) does not daemonize properly when run with backticks and then defuncts

In debian lenny, when running /etc/init.d/tomcat5.5 start, it runs jsvc and expects it to daemonize itself.
From a simple bash shell, this works fine.
However, from a script, this gets completely stuck:
For example, the following works like a charm:
#!/usr/bin/perl
my $cmd = '/etc/init.d/tomcat5.5 start';
system($cmd);
However, the following gets stuck as jsvc does not daemonize:
#!/usr/bin/perl
my $cmd = '/etc/init.d/tomcat5.5 start';
`$cmd`;
It also gets stuck when running it using backticks in bash:
#!/bin/bash
CMD='/etc/init.d/tomcat5.5 start'
`$CMD`
Is this a bug in jsvc? Any idea why this works in a shell or using system(), but not using backticks? I am actually getting defunct/zombie processes because of this issue.
Just a hunch -- for a job to become a daemon it needs to close any file descriptors that were opened in its parent process. Perhaps this is easier to do with system than with backticks/readpipe, though I can't come up with any good reasons why that would be so. What if you used the backticks like:
`$CMD < /dev/null > /dev/null 2>&1`
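In Perl, the same idea would look like this (a sketch, mirroring the backticks from the question):
my $cmd = '/etc/init.d/tomcat5.5 start';
# Give jsvc no inherited pipe to keep open:
`$cmd < /dev/null > /dev/null 2>&1`;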
Backticks evaluate to the output of the command; if there's a lot of data, you may fill the buffer. There's no need to use backticks if you don't want to evaluate or capture the output in the script itself.
For example, this bash script should work:
#!/bin/bash
CMD="/etc/init.d/tomcat5.5 start"
# note no backticks
$CMD
Also, please define "daemonize": do you want this nohup'd and asynchronous?

perl fork doesn't work properly when run remotely (via ssh)

I have a Perl script, script.pl, which forks when run; the parent process writes its pid to a file and then exits, while the child process prints something to STDOUT and then goes into a while loop.
$pid = fork();
if ( ! defined $pid )
{
    die "Failed to fork.";
}
# Parent process
elsif ($pid)
{
    if (!open (PID, ">>running_PIDs"))
    {
        warn "Error opening file to append PID";
    }
    print PID "$pid \n";
    close PID;
}
# Child process
else
{
    print "Output started";
    while ($loopControl)
    {
        # Do some stuff
    }
}
This works fine when I call it locally, i.e. perl script.pl.
The script prints out some things and then returns control to the shell (while the child process goes off into its loop in the background).
However, when I call this via ssh, control is never returned to the shell (nor is the "Output started" line ever printed), i.e.:
$ ssh username@example.com 'perl script.pl'
However, the interesting thing is that the child process does run (I can see it when I type ps).
Can anyone explain what's going on?
EDIT:
I ran it under debug and got this:
### Forked, but do not know how to create a new TTY.
Since two debuggers fight for the same TTY, input is severely entangled.
I know how to switch the output to a different window in xterms
and OS/2 consoles only. For a manual switch, put the name of the created TTY
in $DB::fork_TTY, or define a function DB::get_fork_TTY() returning this.
On UNIX-like systems one can get the name of a TTY for the given window
by typing tty, and disconnect the shell from TTY by sleep 1000000.
Whenever you launch background jobs via non-interactive ssh commands, you need to close or otherwise tie off stdin, stdout, & stderr. Otherwise ssh will wait for the backgrounded process to exit. FAQ.
This is called disassociating or detaching from the controlling terminal and is a general best practice when writing background jobs, not just for SSH.
So the simplest change that doesn't mute your entire command is to add:
#close std fds inherited from parent
close STDIN;
close STDOUT;
close STDERR;
right after your print "Output started";. If your child process needs to print output periodically during its run, then you'll need to redirect to a log file instead.
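A sketch of that variant (the log path is only an example):
# Re-point the standard handles instead of closing them, so the child is
# detached from the ssh session but can still log progress:
open STDIN,  '<',  '/dev/null'       or die "can't reopen STDIN: $!";
open STDOUT, '>>', '/tmp/script.log' or die "can't reopen STDOUT: $!";
open STDERR, '>&', \*STDOUT          or die "can't reopen STDERR: $!";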
ssh username@example.com 'nohup perl script.pl'
You aren't able to exit because there's still a process attached. You need to nohup it.
What is happening is that ssh is executing 'perl script.pl' as a command directly. If you have 'screen' available, you could do:
$ ssh username@example.com 'screen -d -m perl script.pl'
to have it running on a detached screen, and reattach later with screen -r
To understand this better I would recommend reading @Jax's solution on
Getting ssh to execute a command in the background on target machine
It's nothing to do with Perl. It's because of the way SSH handles any long-running process you're trying to background.
I needed to launch script.pl from a bash script (to define essential local variables on the target host):
$ ssh username@example.com /path/to/launcher.sh
/path/to/launcher.sh was invoking the Perl script with:
CMD="/path/to/script.pl -some_arg=$VALUE -other_arg"
$CMD &
which worked locally, but when run via ssh it didn't return.
I tried @pra's solution inside the Perl script, but it didn't work in my case.
Using @Jax's solution, I replaced $CMD & with this:
nohup $CMD > /path/to/output.log 2>&1 < /dev/null &
and it works beautifully.