I'm trying to find a way to start a Perl process under the debugger, but have it just start running automatically without stopping at the first statement and requiring me to enter 'c'. This module is run by a larger system, and I need to be able to periodically (based on external conditions) interrupt the program via an interrupt signal, examine some data structures, and have it continue.
Obviously, I can have the supervising program start my process using "perl -d myProcess", but how do I get it to just run without the initial break? Does anybody know how to make this happen?
Much Thanks.
Thanks. That was a big hint. I see several options including "NonStop".
It looks like using the line PERLDB_OPTS="NonStop" perl -d myprog.pl & does the trick. Then I just kill -INT <pid> and fg it to bring it up in the debugger. Afterwards I enter 'c' to continue execution and bg it so it keeps running.
You can also add this config option to the .perldb file. The format of the config file is a bit unusual and not so well documented, so here goes:
DB::parse_options("NonStop=1");
Other useful options:
DB::parse_options("dumpDepth=3");
DB::parse_options("PrintRet=0");
$DB::deep = 1000;
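Putting those together, a minimal .perldb might look like this (the option values are just the ones mentioned above; the file is evaluated as Perl code when the debugger starts, and perl refuses to read it unless it is owned by you or root and writable only by its owner):
# ~/.perldb -- read and evaluated by the debugger at startup
DB::parse_options("NonStop=1");    # don't stop at the first statement
DB::parse_options("dumpDepth=3");  # limit how deep 'x' dumps nested structures
DB::parse_options("PrintRet=0");   # don't print return values after 'r'
$DB::deep = 1000;                  # recursion depth before the debugger interrupts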
Environment: Linux CentOS 7 (HPC)
Interface: command interface, no GUI
My Perl script has a logic error: it does not go through a "foreach" loop. I use the debugger command below:
perl -d /scripts_location/perlscripts.pl
However, it runs step by step. My script is almost a thousand lines long. Here are my questions:
How can I debug my script starting from a specific line?
How can I figure out that the loop is not run, and why it is not run?
Is there any resource that shows debugging skills as a whole process?
I searched online. Most resources explain the commands, but few introduce debugging from the very beginning. I mean: first a simple program is given, then a breakpoint or other label is set in the program, the output is checked or the error traced, and so on. After viewing those websites, I still don't know how to start debugging with the Perl debugger. I used to debug my programs by printing output at specific lines to check whether it was correct, but the current logic error cannot be found that way.
Any further suggestion and help would be highly appreciated.
After starting the debugger, you can tell it to continue until it reaches a given line, e.g.
c 124
To figure out why a foreach loop isn't entered, check the loop's list before entering it. You can tell the debugger to evaluate the expression like this:
x @values
if the loop is something like foreach my $value (@values). It will probably tell you
empty array
To discover why the array is empty, you can try to "watch" the array
w @values
and then continue the script with c. It will stop as soon as any watched value changes.
Use h to view the help.
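Putting it together, here is a minimal end-to-end session (the script, its name, and the line numbers are invented for illustration):
# buggy.pl -- a toy script whose foreach loop never runs
my @values = grep { $_ > 10 } (1, 2, 3);   # always empty
foreach my $value (@values) {
    print "got $value\n";
}
Start it with perl -d buggy.pl, then:
b 3
c
x @values
The debugger stops at line 3 (the foreach), and x @values prints "empty array", showing that the list was already empty before the loop was ever reached.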
Before you answer: I'm not looking for the functionality of the semicolon (;) to suppress command-line printing.
I have a set of scripts which are not mine and which I do not have the ability to change. However, in my scripts I make a call to these other scripts through evalin('base', 'scriptName'). Unfortunately, these other scripts do a lot of unnecessary and ugly printing to the command window that I don't want to see. Without being able to edit these other scripts, I would like a way to suppress output to the command line while they are executing.
One potential answer was to use evalc, but when I try evalc(evalin('base', 'scriptName')) MATLAB throws an error complaining that it cannot execute a script as a function. I'm hoping there's something like the ability to disable command window printing, or else to redirect all output to some null file, much like /dev/null in Unix.
I think you just need to turn the argument in your evalc example into a string:
evalc('evalin(''base'', ''scriptName'')');
Have you tried this solution?
echo off;
I don't know if it will fit your needs, but another solution can be to open a new session of MATLAB and use it only minimized, in -nodesktop form (just the command window). You can run the annoying scripts there and work in the main session as usual.
The problem here is that the sessions can't be synchronized, so if you need to work with the results of the scripts all the time, it'll be a little bit complicated. Maybe you can save the results to disk, then load them from the main session...
But it mainly depends on your workflow with those scripts.
Every time I finish running a MATLAB code collection on the command line and then exit MATLAB, the standard output gets messed up. I can still use the terminal window, but whatever I type won't show up on the screen, leaving me either typing blind or opening a new terminal and tediously cd-ing back to the old place.
This happens every single time I use make to run a MATLAB collection, and since I work on this a lot, it has become very annoying. Does anyone know what the problem is and how I should fix it?
As was pointed out in the comments, the make script is probably dumping "bad" characters to the terminal. You could prevent this (but possibly lose useful information) by redirecting the output: instead of sending it to the terminal window, you can send it to a file, or even /dev/null ("the great bit bucket in the sky").
The underlying problem, however, is that your makefile is sending these characters to the terminal in the first place. I would recommend that you redirect the output to a file with something like make > myDump.txt, then examine the resulting file to see what is going on and where in your makefile the problem is created. It is possible that you will still get some output when you do this; that's because by default > redirects only stdout, not stderr, a second output stream used for error messages. You can redirect both to the file with make > myDump.txt 2>&1.
You have already seen the recommendation to use stty sane to restore the status of the terminal - I am repeating it here in case someone only looks at answers, and not at comments; but I don't take credit for it :-).
I am stuck on a problem on our live server.
We have a Perl script which runs almost 15 to 18 hours a day and creates 100+ subprocesses every day. In one place it has a command (a product command which we run on the command line of the Solaris box) which is triggered with backticks inside the Perl code.
It looks like the backticks command randomly gets skipped or fails.
For example, if I need to run it for 50 customers, 2 or 3 fail at random.
I do not see any evidence anywhere that the command was even triggered.
Since it's a live server, we can't try many code changes until we are sure about the problem.
Here is the code:
my $comm = "inventory -noX customer1";   # sample command I have given here
my $newLogFile = "<path where the command output file gets created>";
# "2>&1 > $newLogFile" sends stdout to the file, while the backticks
# capture stderr (duplicated to the old stdout) into $piddy
my $piddy = `$comm 2>&1 > $newLogFile`;
Is it because of the backticks that this happens? I am really not sure :(.
I have also tried various analyses (memory, CPU, disk space, adding librtld_db.so to LD_LIBRARY_PATH, etc.) but no luck. The Perl is 64-bit, too. What else can I try? :(
I suspect you are not checking for errors (and perl doesn't make that easy to do correctly for backticks).
Consider using IPC::System::Simple's capture in place of your backticks/qx.
As its doc says, "If there's an error, it will die with a detailed description of what went wrong."
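As a sketch of what that could look like with the sample command from the question (the single-string form runs through a shell, so the 2>&1 redirection from the original code still works):
use strict;
use warnings;
use IPC::System::Simple qw(capture);

my $comm = "inventory -noX customer1";
my $output = eval { capture("$comm 2>&1") };
if ($@) {
    warn "command failed: $@";   # capture() died with a detailed message
} else {
    print $output;               # or write it to the log file as before
}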
It shouldn't fail just because of the backticks; however, because they spawn a new process, that process may periodically be subject to failure due to system conditions (e.g. system load). Backticks are really a "fire and forget" method and should never be used for anything critical in a production environment. As previously suggested, there are far more robust ways to manage spawning external processes.
If the command's output is being lost due to buffering, you might try turning off buffering, but keep an eye on it for performance degradation (it's usually not significant).
Buffering can be turned off for an entire script by adding this near the top:
$|=1;   # unbuffer the currently selected output handle (STDOUT)
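Equivalently, and a little more self-documenting:
use IO::Handle;
STDOUT->autoflush(1);   # same effect as $|=1 for STDOUT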
When calling external commands, I use system from IPC::System::Simple or open3 from IPC::Open3.
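For the open3 route, a minimal sketch (reusing the sample command from the question; note that reading the two streams one after the other like this can deadlock if the command produces a lot of output, so for production code a select() loop or IPC::Run is safer):
use strict;
use warnings;
use IPC::Open3 qw(open3);
use Symbol qw(gensym);

my $err = gensym;   # unlike stdin/stdout, the stderr handle is not autovivified
my $pid = open3(my $in, my $out, $err, 'inventory', '-noX', 'customer1');
close $in;                 # the command gets no input
my @stdout = <$out>;
my @stderr = <$err>;
waitpid $pid, 0;
my $exit = $? >> 8;        # the command's exit status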
I have a Perl program based on IO::Async, and it sometimes just exits after a few hours or days without printing any error message whatsoever. There's nothing in dmesg or /var/log either. STDOUT/STDERR both have autoflush(1) set, so data shouldn't be lost in buffers. It doesn't actually return from IO::Async::Loop->loop_forever; a print I put there just to make sure of that never gets triggered.
Now, one way would be to keep peppering the program with more and more prints and hope one of them gives me some clue. Is there a better way to get information about what was going on in a program that made it exit or silently crash?
One trick I've used is to run the program under strace or ltrace (or attach to the process using strace). Naturally that was under Linux. Under other operating systems you'd use ktrace or dtrace or whatever is appropriate.
A trick I've used for programs which only exhibit sparse issues over days or weeks, and then only on a handful among hundreds of systems, is to direct the output from my tracer to a FIFO and have a custom program keep only the last 10K lines in a ring buffer, with a handler on SIGPIPE and SIGHUP to dump the current buffer contents into a file. (It's a simple program, but I don't have a copy handy and I'm not going to rewrite it tonight; my copy was written for internal use and is owned by a former employer.)
The ring buffer allows the program to run indefinitely without fear of running systems out of disk space; we usually only need a few hundred, or at most a couple thousand, lines of the trace in such matters.
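Such a helper is straightforward to sketch in Perl (this is a reconstruction, not the original tool; the FIFO path, dump file name, and buffer size are invented):
#!/usr/bin/perl
# Keep only the last N lines read from a FIFO; dump them on SIGHUP/SIGPIPE.
use strict;
use warnings;

my $fifo_path = shift // '/tmp/trace.fifo';
my $max_lines = 10_000;
my @ring;

$SIG{HUP} = $SIG{PIPE} = sub {
    open my $out, '>', "ringbuf-dump.$$" or return;
    print {$out} @ring;    # snapshot of the most recent lines
    close $out;
};

open my $fifo, '<', $fifo_path or die "cannot open $fifo_path: $!";
while (my $line = <$fifo>) {
    push @ring, $line;
    shift @ring if @ring > $max_lines;
}
You would create the FIFO with mkfifo and point the tracer's output at it, e.g. strace -o /tmp/trace.fifo -p <pid>.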
If you are capturing STDERR, you could start the program as perl -MCarp::Always foo_prog. Carp::Always forces a stack trace on all errors.
A sudden exit without any error message is possibly a SIGPIPE. Traditionally SIGPIPE is used to stop things like the cat command in the following pipeline:
cat file | head -10
It doesn't usually result in anything being printed either by libc or perl to indicate what happened.
Since in an IO::Async-based program you'd not want to silently exit on SIGPIPE, my suggestion would be to put somewhere in the main file of the program a line something like
$SIG{PIPE} = sub { die "Aborting on SIGPIPE\n" };
which will at least alert you to this fact. If instead you use Carp::croak without the \n, you might even be lucky enough to get the file and line number of the syswrite, etc. that caused the SIGPIPE.
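For example (a sketch; Carp::confess would print a full backtrace instead of just one file and line):
use Carp ();
# without a trailing \n, "at FILE line N" is appended to the message,
# pointing near the syswrite (or similar) that raised the SIGPIPE
$SIG{PIPE} = sub { Carp::croak("Aborting on SIGPIPE") };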