Running the Perl debugger twice

I have a case where I invoke the Perl debugger twice. For example, progA.pl:
use warnings;
use strict;
system("perl -d progB.pl");
and progB.pl:
use warnings;
use strict;
$DB::single=1;
print "Hello\n";
Then I run progA.pl like:
$ perl -d progA.pl
This does not work very well. On my system (Ubuntu 14.04, Perl 5.18), I get errors from the debugger. For example:
### Forked, but do not know how to create a new TTY. #########
Since two debuggers fight for the same TTY, input is severely entangled.

I know how to switch the output to a different window in xterms, OS/2
consoles, and Mac OS X Terminal.app only. For a manual switch, put the name
of the created TTY in $DB::fork_TTY, or define a function DB::get_fork_TTY()
returning this.

On UNIX-like systems one can get the name of a TTY for the given window by
typing tty, and disconnect the shell from TTY by sleep 1000000.
It also tries to open a new terminal window with the title Daughter Perl debugger, but the new terminal only shows the error sh: 1: 3: Bad file descriptor.
How can these problems be avoided? I just want the debugger to work as normal.

Use 'do' instead of 'system'
perl -d progA.pl
# will stop after your $DB::single = 1
# line in progB.pl
From perldoc -f do:
do EXPR Uses the value of EXPR as a filename and executes the contents
        of the file as a Perl script.
            do 'stat.pl';
        is just like
            eval `cat stat.pl`;
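Concretely, that means changing progA.pl to load progB.pl into the same perl process, so the debugger that is already running sees the breakpoint. A minimal sketch:
use warnings;
use strict;

# Run progB.pl in this interpreter instead of spawning a second
# "perl -d"; the active debugger honors its $DB::single = 1 line.
do './progB.pl';
die "couldn't parse progB.pl: $@" if $@;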

I'm not sure if this is what you are looking for, because I don't understand the bigger picture of what you are trying to do.
But if you use a different debugger like Devel::Trepan at the outset, then things may work:
$ trepan.pl progA.pl
-- main::(progA.pl:4 @0x21282c8)
system("perl -d progB.pl");
(trepanpl): s
-- main::(progB.pl:3 @0x7042a8)
$DB::single=1;
(trepanpl): s
-- main::(progB.pl:4 @0x878be8)
print "Hello\n";
(trepanpl): s
Hello
Debugged program terminated. Use 'q' to quit or 'R' to restart.
(trepanpl): quit
trepan.pl: That's all, folks...
Debugged program terminated. Use 'q' to quit or 'R' to restart
(trepanpl) quit
trepan.pl: That's all, folks...
The (trepanpl) prompt after the "program terminated" message is a bit odd, but all it means here is that progB.pl is finished. After quitting that, as I did above, if you had another Perl statement after your system() command, the debugger would show that statement rather than give the second "finished" message.
Another feature of Devel::Trepan is that you can do nested debugging from inside that debugger with its debug command. Here is an example of that:
trepan.pl progA.pl
-- main::(progA.pl:4 @0x10612c8)
system("perl -d progB.pl");
set auto eval is on.
(trepanpl): debug system("perl -d progB.pl")
-- main::((eval 1437)[/usr/local/share/perl/5.18.2/Devel/Trepan/DB/../../../Devel/Trepan/DB/Eval.pm:129] remapped /tmp/HSXy.pl:6 @0x51f13e0)
system("perl -d progB.pl")
((trepanpl)): s
-- main::(progB.pl:3 @0x9382a8)
$DB::single=1;
(trepanpl): s
-- main::(progB.pl:4 @0xaacbe8)
print "Hello\n";
(trepanpl): s
Hello
Debugged program terminated. Use 'q' to quit or 'R' to restart.
(trepanpl): quit
trepan.pl: That's all, folks...
$DB::D[0] = 0
Leaving nested debug level 1
-- main::(progA.pl:4 @0x10612c8)
system("perl -d progB.pl");
(trepanpl):

Related

Perl -- command executing inside a script hangs

When I run the following script, it does exactly what I want it to do and exits:
setDisplay.sh:
#!/bin/bash
Xvfb -fp /usr/share/fonts/X11/misc/ :22 -screen 0 1024x768x16 2>&1 &
export DISPLAY=:22
When I run ./setDisplay.sh, everything works fine.
OK, here's where the fun starts...
I have a Perl script that calls setDisplay...
Here is the eamorr.pl script:
#!/usr/bin/perl
use strict;
use warnings;
my $homeDir="/home/eamorr/Dropbox/site/";
my $cmd;
my $result;
print "-----Setting display...\n";
$cmd="sh $homeDir/setDisplay.sh";
print $cmd."\n";
$result=`$cmd`;
print $result;
It just hangs when I run ./eamorr.pl
I'm totally stuck...
When you do this:
$result=`$cmd`;
a pipe is created connecting the perl process to the external command, and perl reads from that pipe until EOF.
Your external command creates a background process which still has the pipe on its stdout (and also its stderr since you did 2>&1). There will be no EOF on that pipe until the background process exits or closes its stdout and stderr or redirects them elsewhere.
If you intend to collect the stdout and stderr of Xvfb into the perl variable $result, you'll naturally have to wait for it to finish. If you didn't intend that, I can't guess what you were trying to do with the 2>&1.
Also a script that ends with an export command is suspect. It can only modify its own environment, and then it immediately exits so there's no noticeable effect. Usually that's a sign that someone is trying to modify the parent process's environment, which is not possible.
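If the goal is just to get Xvfb started and the display set, one fix (a sketch reusing $homeDir from eamorr.pl; the log path is illustrative, and it assumes you don't actually need the script's output in $result) is to skip the backticks so no pipe is created, and point the script's output at a file:

print "-----Setting display...\n";
# system() creates no pipe, so the backgrounded Xvfb can't hold one open;
# its chatter goes to a log file instead of a pipe that never sees EOF.
my $cmd = "sh $homeDir/setDisplay.sh > /tmp/setDisplay.log 2>&1";
system($cmd) == 0
    or warn "setDisplay.sh exited with status ", $? >> 8, "\n";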

Invoking Perl debugger so that it runs until first breakpoint

When I invoke my Perl debugger with perl -d myscript.pl, the debugger starts, but it does not execute any code until I press n (next) or c (continue).
Is there any way to invoke the debugger and have it run through the code by default until it hits a breakpoint?
If so, is there any statement that I can use in my code as a breakpoint to have the debugger stop when it hits it?
Update:
Here is what I have in my .perldb file:
print "Reading ~/.perldb options.\n";
push @DB::typeahead, "c";
parse_options("NonStop=1");
Here is my hello_world.pl file:
use strict;
use warnings;
print "Hello world.\n";
$DB::single=1;
print "How are you?";
Here is the debugging session from running: perl -d hello_world.pl:
Reading ~/.perldb options.
Hello world
main::(hello_world.pl:6): print "How are you?";
auto(-1) DB<1> c
Debugged program terminated. Use q to quit or R to restart,
use o inhibit_exit to avoid stopping after program termination,
h q, h R or h o to get additional info.
DB<1> v
9563
9564
9565    sub at_exit {
9566==>     "Debugged program terminated.  Use `q' to quit or `R' to restart.";
9567    }
9568
9569    package DB;    # Do not trace this 1; below!
DB<1>
In other words, my debugger skips print "How are you?", and instead stops once the program finishes, which is not what I want.
What I want is to have the debugger run my code without stopping anywhere (neither at the beginning nor at the end of my script), unless I explicitly have a $DB::single=1; statement, in which case I would like it to stop before running the next line. Is there any way to do this?
For reference, I am using:
$ perl --version
This is perl 5, version 14, subversion 1 (v5.14.1) built for x86_64-linux
Put
$DB::single = 1;
before any statement to set a permanent breakpoint in your code.
This works with compile-time code, too, and may be the only good way to set a breakpoint during the compile phase.
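For instance (a sketch; the variable inside the BEGIN block is purely illustrative), this stops the debugger while the file is still being compiled:

BEGIN {
    $DB::single = 1;   # debugger stops before the next statement...
    my $setting = 42;  # ...which runs during the compile phase
}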
To have the debugger automatically start your code, you can manipulate the @DB::typeahead array in either a .perldb file or in a compile-time (BEGIN) block in your code. For example:
# .perldb file
push @DB::typeahead, "c";
or
BEGIN { push @DB::typeahead, "p 'Hello!'", "c" }
...
$DB::single = 1;
$x = want_to_stop_here();
There is also a "NonStop" option that you could set in .perldb or in the PERLDB_OPTS environment variable:
PERLDB_OPTS=NonStop perl -d myprogram.pl
All of this (and much more) is discussed deep in the bowels of perldebug and perl5db.pl.
Update:
To address the issues raised in the most recent update, use the following in ~/.perldb:
print "Reading ~/.perldb options.\n";
push @DB::typeahead, "c";
parse_options("inhibit_exit=0");
Also check out Enbugger. And while on the topic of debuggers, see Devel::Trepan which also works with Enbugger.

jsvc (tomcat) does not daemonize properly when run with backticks and then defuncts

In debian lenny, when running /etc/init.d/tomcat5.5 start, it runs jsvc and expects it to daemonize itself.
From a simple bash shell, this works fine.
However, from a script, this gets completely stuck:
For example, the following works like a charm:
#!/usr/bin/perl
my $cmd = '/etc/init.d/tomcat5.5 start';
system($cmd);
However, the following gets stuck as jsvc does not daemonize:
#!/usr/bin/perl
my $cmd = '/etc/init.d/tomcat5.5 start';
`$cmd`;
It also gets stuck when running it using backticks in bash:
#!/bin/bash
CMD='/etc/init.d/tomcat5.5 start'
`$CMD`
Is this a bug in jsvc? Any idea why this works in a shell or using system(), but not using backticks? I am actually getting defunct/zombie processes because of this issue.
Just a hunch -- for a job to become a daemon it needs to close any file descriptors that were opened in its parent process. Perhaps this is easier to do with system than with backticks/readpipe, though I can't come up with any good reasons why that would be so. What if you used the backticks like:
`$CMD < /dev/null > /dev/null 2>&1`
Backticks evaluate to the output of the command; if there's lots of data, you may fill the buffer. There's no need to use backticks if you don't want to evaluate or capture the output in the script itself.
For example, this bash script should work:
#!/bin/bash
CMD="/etc/init.d/tomcat5.5 start"
# note no backticks
$CMD
Also, please define "daemonize": do you want this nohup'd and asynchronous?

perl fork doesn't work properly when run remotely (via ssh)

I have a Perl script, script.pl, which, when run, does a fork; the parent process writes the child's PID to a file and then exits, while the child process prints something to STDOUT and then goes into a while loop.
$pid = fork();
if ( !defined $pid ) {
    die "Failed to fork.";
}
# Parent process
elsif ($pid) {
    if ( !open( PID, ">>running_PIDs" ) ) {
        warn "Error opening file to append PID";
    }
    print PID "$pid \n";
    close PID;
}
# Child process
else {
    print "Output started";
    while ($loopControl) {
        # Do some stuff
    }
}
This works fine when I call it locally, i.e. perl script.pl.
The script prints out some things then returns control back to the shell (while the child process goes off into its loop in the background).
However, when I call this via ssh, control is never returned back to the shell (nor is the "Output started" line ever printed):
$ ssh username@example.com 'perl script.pl'
However, the interesting thing is, the child process does run (I can see it when I type ps).
Can anyone explain what's going on?
EDIT:
I ran it under the debugger and got this:
### Forked, but do not know how to create a new TTY.
Since two debuggers fight for the same TTY, input is severely entangled.
I know how to switch the output to a different window in xterms
and OS/2 consoles only. For a manual switch, put the name of the created TTY
in $DB::fork_TTY, or define a function DB::get_fork_TTY() returning this.
On UNIX-like systems one can get the name of a TTY for the given window
by typing tty, and disconnect the shell from TTY by sleep 1000000.
Whenever you launch background jobs via non-interactive ssh commands, you need to close or otherwise tie off stdin, stdout, and stderr. Otherwise ssh will wait for the backgrounded process to exit (see the ssh FAQ).
This is called disassociating or detaching from the controlling terminal and is a general best practice when writing background jobs, not just for SSH.
So the simplest change that doesn't mute your entire command is to add:
#close std fds inherited from parent
close STDIN;
close STDOUT;
close STDERR;
right after your print "Output started";. If your child process needs to print output periodically during its run, then you'll need to redirect to a log file instead.
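For the log-file variant, a minimal sketch (the log path is illustrative) reopens the standard handles rather than closing them:

# Detach the child from the ssh-inherited descriptors, but keep a
# log file for its periodic output. The path is illustrative.
open STDIN,  '<',  '/dev/null'       or die "Can't reopen STDIN: $!";
open STDOUT, '>>', '/tmp/script.log' or die "Can't reopen STDOUT: $!";
open STDERR, '>&', \*STDOUT          or die "Can't reopen STDERR: $!";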
ssh username@example.com 'nohup perl script.pl'
You aren't able to exit because there's still a process attached. You need to nohup it.
What is happening is that ssh is executing 'perl script.pl' as a command directly. If you have 'screen' available, you could do:
$ ssh username@example.com 'screen -d -m perl script.pl'
to have it running on a detached screen, and reattach later with screen -r
To understand this better I would recommend reading @Jax's solution on
Getting ssh to execute a command in the background on target machine
It's nothing to do with Perl. It's because of the way SSH handles any long-running process you're trying to background.
I needed to launch script.pl from a bash script (to define essential local variables on the target host):
$ ssh username@example.com /path/to/launcher.sh
/path/to/launcher.sh was invoking the Perl script with:
CMD="/path/to/script.pl -some_arg=$VALUE -other_arg"
$CMD &
which worked locally, but when run via ssh it didn't return.
I tried @pra's solution inside the Perl script, but it didn't work in my case.
Using @Jax's solution, I replaced $CMD & with this:
nohup $CMD > /path/to/output.log 2>&1 < /dev/null &
and it works beautifully.

How can I check the status of the first program in pipeline in Perl's system()?

perl -e 'system ("crontab1 -l");print $?'
returns -1 as expected (program crontab1 doesn't exist)
perl -e 'system ("crontab1 -l|grep blah");print $?'
returns 256.
What is the way to check the status of the first (or both) programs?
You are getting the exit status of the entire command, just as you should. If you want the exit status separately, you're going to have to run the commands separately.
#!/usr/bin/perl
system("crontab1 -l > /tmp/junk.txt"); print $?;
system("grep blah /tmp/junk.txt"); print $?;
as an example.
If you can tolerate using something other than system, there are easier solutions. For example, the results method in IPC::Run returns all exit codes of the chain.
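A sketch of that approach (IPC::Run is a CPAN module, not core; the real crontab is used here rather than the deliberately broken crontab1):

use IPC::Run qw( start );

# Run the pipeline and collect one exit code per command.
# IPC::Run croaks at spawn time if a program can't be found at all.
my $h = start [ 'crontab', '-l' ], '|', [ 'grep', 'blah' ];
$h->finish;
my @exit_codes = $h->results;   # e.g. (0, 1) if grep matched nothing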
Remember, you're supposed to use $?>>8 to get the exit code, not $?
perl -e 'system("set -o pipefail;false | true");print $?>>8,"\n"'
1
This ("pipefail") only works if your shell is bash 3. Cygwin and some Linux flavors ship with it; not sure about mac.
You should be aware that 256 is an error return. 0 is what you'd get on success:
perl -e 'system("true");print $?>>8,"\n"'
0
I didn't know system returned -1 for a single command that isn't found, but $?>>8 should still be non-zero in that case.
[This was composed as an answer to another question which was closed as a duplicate of this one.]
Executing a shell command requires executing a shell. To that end,
system($shell_command)
is equivalent to
system('/bin/sh', '-c', $shell_command)
As such, all of your examples run a single program (/bin/sh). If you want the exit statuses of multiple children, you're going to need to have multiple children!
use IPC::Open3 qw( open3 );

# Child 1 reads /dev/null; child 1's stdout feeds child 2 through a pipe.
open(local *CHILD1_STDIN, '<', '/dev/null')
    or die $!;
pipe(local *CHILD2_STDIN, local *CHILD1_STDOUT)
    or die $!;

my $child1_pid = open3(
    '<&CHILD1_STDIN',
    '>&CHILD1_STDOUT',
    '>&STDERR',
    'prog1', 'arg1', 'arg2',
);
my $child2_pid = open3(
    '<&CHILD2_STDIN',
    '>&STDOUT',
    '>&STDERR',
    'prog2', 'arg1', 'arg2',
);

# Reap each child and collect its $? separately.
my @pipe_status = map { waitpid($_, 0); $? } $child1_pid, $child2_pid;
IPC::Open3 is rather low level. IPC::Run3 and/or IPC::Run can possibly make this easier. [Update: Indeed, IPC::Run does].
If you want to check the status, don't put them all in the same system. Open a reading pipe to the first program to get its output, then open another pipe to the other program.
The shell returns an exit status only for the last program in the pipeline, and if the shell doesn't return it, perl can't report it.
I don't know of any way to get at the exit code returned by earlier programs in a pipeline other than by running each individually and using temporary files instead of pipes.
What is the way to check the status of the first (or both) programs?
There is no such way, at least, not as you have constructed things. You may have to manage sub-processes yourself via fork(), exec() and waitpid() if you must know these things.
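A minimal sketch of doing that by hand for a single command (chaining two commands with a pipe looks like the IPC::Open3 example above):

# Spawn one command yourself and read its status from $?.
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    exec('crontab', '-l') or die "exec failed: $!";   # child
}
waitpid($pid, 0);          # parent reaps the child
my $exit_code = $? >> 8;   # crontab's own exit code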
Here is roughly what is happening in your code fragment:
1. perl's system() spawns a shell, and perl wait()s for that subprocess to terminate.
2. The shell sets up a pipeline:
   - a subshell exec()s grep on the read-end of the pipe
   - a subshell fails to locate crontab1 anywhere in $PATH, and exit()s 127 (on my system, that is, where 127 is the shell indicating failure to find a program to run)
3. grep detects end-of-file on its input and, having matched nothing, exit()s 1.
4. The shell exit()s with the exit code of the last process in the pipeline, which, again, is 1.
5. perl detects the shell's exit code of 1, which is encoded in $? as 256.
(256 >> 8 == 1)
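In Perl terms, the standard decoding of $? after system() or backticks is:

my $exit_code = $? >> 8;    # the child's exit() value
my $signal    = $? & 127;   # signal that killed the child, if any
my $dumped    = $? & 128;   # true if the child dumped core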