I used Term::ShellUI and almost everything is working as expected, but when I press Ctrl-C I want to print:
Please use ctrl+d to exit the shell
I handle the signal for that, but the message is printed only after I press the enter key. How can I resolve this?
You can do the same without using the IO::Handle library by setting the $| variable to 1 before printing:
$SIG{INT} = sub {
    $| = 1;    # enable autoflush so the message is not held in the buffer
    print "Please use ctrl+d to exit the shell\n";
};
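As a self-contained illustration (a minimal sketch; the sleep stands in for the Term::ShellUI read loop and is not part of the original code), you can verify that the handler flushes immediately:
#!/usr/bin/perl
use strict;
use warnings;

$SIG{INT} = sub {
    $| = 1;    # autoflush: the message appears without waiting for a newline
    print "Please use ctrl+d to exit the shell\n";
};

sleep 60;    # stand-in for the interactive shell loop; press Ctrl-C to test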
I am debugging a daemon and I'm trying to use print statements to output information to the terminal. The gist of my code is:
#!/usr/bin/env perl
use strict;
use warnings;
use Readonly;
Readonly my $TIMEOUT => ...;
...
while (1) {
    print "DEBUG INFO";
    ...
    sleep $TIMEOUT;
}
However, no output is getting printed to my terminal. Why is this?
Summary:
Use $| = 1 or add a newline ("\n") to the print.
Explanation:
The reason this isn't printing to the terminal is that perl is buffering the output for efficiency. Once the print buffer is full, it is flushed and the output appears in your terminal. You may well want to force the buffer to flush, since depending on the length of $TIMEOUT you could otherwise be waiting a considerable time for output!
There are two main approaches to flushing the buffer:
1) Since you're printing to your terminal, your filehandle is most likely STDOUT. Filehandles attached to a terminal are line-buffered by default, so you can flush the buffer and force output simply by adding a newline character to the print statement:
while (1) {
    print "DEBUG INFO\n";
    ...
    sleep $TIMEOUT;
}
2) The second approach is to use $|, which, when set to non-zero, makes the currently selected filehandle (STDOUT by default, or the last filehandle passed to select) hot and flushes the buffer after every print. The following will therefore also force the debug information to be printed:
$| = 1;
while (1) {
    print "DEBUG INFO";
    ...
    sleep $TIMEOUT;
}
If using syntax such as this is confusing, then you may like to consider:
use IO::Handle;
STDOUT->autoflush(1);
while (1) {
    print "DEBUG INFO";
    ...
    sleep $TIMEOUT;
}
In many code examples where immediate flushing of the buffer is required, you may see $|++ used to make a file-handle hot and immediately flush the buffer, and --$| to make a file-handle cold and switch off auto-flushing. See these two answers for more details:
Perl operator: $|++; dollar sign pipe plus plus
How does --$| work in Perl?
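For instance, a minimal sketch of both idioms (not from the linked answers): because $| is magical and only ever holds 0 or 1, the decrement actually toggles the value rather than making it negative.
$|++;                   # same as $| = 1: STDOUT is now hot
print "flushed immediately";
--$|;                   # back to 0: normal buffering restored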
If you're interested in learning more about perl buffers, then I would suggest reading Suffering from Buffering, which gives great insight into why we have buffering and explains how to switch it on and off.
I want to display an iterative progress bar during the execution of a particular command in my Perl-CGI program. I use the CGI::ProgressBar module to achieve this. For example, if I want to show the progress bar during the execution of an RKHunter scan, this is the code I wrote:
use CGI::ProgressBar qw/:standard/;
$| = 1;
print progress_bar( -from=>1, -to=>100 );
open(my $quik_rk, '-|', 'rkhunter', '--enable', '"known_rkts"') or print "ERROR RUNNING BASIC ROOTKIT CHECK!!";
# print the progress bar while the command executes
while (<$quik_rk>) {
    print update_progress_bar;
    # print "<img src=\"ajax-loader.gif\"></img>";
}
close($quik_rk);
This works fine. However, when I try the same on another command (this one scans using Linux Maldet) immediately after the code above:
open(my $quik_lmd, '-|', 'maldet', '-a', '/home?/?/public_html') or print "ERROR RUNNING BASIC MALWARE CHECK!!";
my $this_ctr = 0;
while (<$quik_lmd>) {
    $this_ctr++;
    print update_progress_bar;
}
close($quik_lmd);
The progress bar doesn't update, but the command itself runs in the background.
What am I doing wrong?
Is there a better way to show a progress bar on a browser in Perl-CGI?
I am not familiar with RKHunter, but based on your results my guess is that it outputs a line of text for each test it runs, while the other command does not.
Each line of text output by RKHunter will trigger the next iteration of <$quik_rk>.
The second command, read via <$quik_lmd>, is likely silent, so the loop body never runs. Once the command terminates, execution continues after your while.
The key bit here is "line of text". The <$filehandle> operator returns a line of text each time it sees a newline character. To do what you want with this construct, you would need to coerce the second command into being verbose about its activities and, most importantly, verbose with a lot of newlines.
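If the command does produce output but without newlines, one option (a hedged sketch, not tested against maldet) is to drive the loop with fixed-size reads instead of lines:
my $buf;
# sysread returns as soon as any bytes arrive, up to 256 at a time,
# so progress no longer depends on the child printing newlines
while (sysread($quik_lmd, $buf, 256)) {
    print update_progress_bar;
}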
Alternatively, you can open a background process and use sleep to manage your loop, e.g.,
use strict;
use POSIX qw(WNOHANG);
my $pid = open(my $quik_rk, '-|', 'sleep', '5'); # replace with your command
do {
    print "waiting\n"; # update_progress_bar;
    sleep 1;           # seconds between updates
} while (waitpid($pid, WNOHANG) == 0);
My understanding is that closing the handle for an IO::Pipe object should be done with the method ($fh->close) and not the built-in (close($fh)).
The other day I goofed and used the built-in out of habit on an IO::Pipe object that was opened to a command I expected to fail. I was surprised when $? was zero and my error checking wasn't triggered.
I realized my mistake. If I use the built-in, IO::Pipe can't perform the waitpid() and can't set $?. But what surprised me was that perl still seemed to close the pipe without setting $? via the core.
I worked up a little test script to show what I mean:
use 5.012;
use warnings;
use IO::Pipe;
say 'init pipes:';
pipes();
my $fh = IO::Pipe->reader(q(false));
say 'post open pipes:';
pipes();
say 'return: ' . $fh->close;
#say 'return: ' . close($fh);
say 'status: ' . $?;
say q();
say 'post close pipes:';
pipes();
sub pipes
{
    for my $fd ( glob("/proc/self/fd/*") )
    {
        say readlink($fd) if -p $fd;
    }
    say q();
}
When using the method it shows the pipe being gone after the close and $? is set as I expected:
init pipes:
post open pipes:
pipe:[992006]
return: 1
status: 256
post close pipes:
And, when using the built-in it also appears to close the pipe, but does not set $?:
init pipes:
post open pipes:
pipe:[952618]
return: 1
status: 0
post close pipes:
It seems odd to me that the built-in results in the pipe closure, but doesn't set $?. Can anyone help explain the discrepancy?
Thanks!
If you look at the code for IO::Handle (of which IO::Pipe::End is a sub-class), you will see the following:
sub close {
    @_ == 1 or croak 'usage: $io->close()';
    my($io) = @_;
    close($io);
}
It looks like $fh->close just calls close $fh. Of course, we should not be peeking behind the curtain.
We can see after IO::Pipe does a close $fh (behind the scenes), it then does a waitpid:
package IO::Pipe::End;
our(@ISA);
@ISA = qw(IO::Handle);

sub close {
    my $fh = shift;
    my $r = $fh->SUPER::close(@_); # <-- This just calls a CORE::close
    waitpid(${*$fh}{'io_pipe_pid'}, 0)
        if (defined ${*$fh}{'io_pipe_pid'});
    $r;
}
Also interesting is this from the close Perldoc:
If the filehandle came from a piped open, close returns false if one of the other syscalls involved fails or if its program exits with non-zero status. If the only problem was that the program exited non-zero, $! will be set to 0.
Closing a pipe also waits for the process executing on the pipe to exit--in case you wish to look at the output of the pipe afterwards--and implicitly puts the exit status value of that command into $? and ${^CHILD_ERROR_NATIVE}.
That answers your question right there.
But what I was surprised by was that perl seemed to still close the pipe without setting $? via the core.
Why would it? It has no way of knowing that the process on the other end of the pipe is a child, much less one the program should wait for. Since it has no reason to call waitpid, $? isn't going to get set.
In fact, I doubt it could wait for the process on the other end even if it wanted to: there is no way to obtain the pid of the process at the other end of a pipe, and there may actually be multiple processes on the other end.
IO::Pipe::close only calls waitpid when IO::Pipe is used to "open a process".
Similarly, close only calls waitpid when open is used to "open a process".
A process "opened" using one method cannot be closed by the other.
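To illustrate the pairing (a hedged sketch, reusing false as the conveniently failing command from the question; assumes the same use 5.012 / use warnings / use IO::Pipe preamble as the test script above), each close below matches the call that created the handle, so waitpid runs and $? is set:
# Opened with the built-in open: close with the built-in close.
open(my $ph, '-|', 'false') or die "open failed: $!";
close($ph);                          # built-in close waits and sets $?
say 'built-in close, status: ' . $?; # 256

# Opened with IO::Pipe: close with the method.
my $io = IO::Pipe->reader('false');
$io->close;                          # method close calls waitpid and sets $?
say 'method close, status: ' . $?;   # 256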
It turns out that my confusion stems from a flawed assumption that the disappearing pipe coincided with a complete process termination. That appears to not be the case, as the process is still available for a wait().
> perl -MIO::Pipe -le 'my $io = IO::Pipe->reader(q(false)); close($io); print $?; print wait(); print $?'
0
8857
256
I'm using Term::ReadLine::Gnu and have run into a problem with signal handling. Given the script below and a TERM signal sent to the script, the handler for the TERM signal is not triggered until after the enter key is pressed. Using Term::ReadLine::Perl this does not occur.
I've read that Term::ReadLine::Gnu has its own internal signal handlers, but frankly I'm at a loss as to how to work with them.
I've reviewed http://search.cpan.org/~hayashi/Term-ReadLine-Gnu-1.20/Gnu.pm#Term::ReadLine::Gnu_Variables and tried setting the rl_catch_signals variable to 0, but that didn't help. Ideally, I'd like to work with the Gnu signal handlers, but I'll settle for disabling them too.
To be absolutely specific, I need the TERM handler to trigger after the signal is received instead of waiting for the enter key to be pressed.
Any help or advice is certainly appreciated!
#!/usr/bin/perl
use strict;
use warnings;
use Term::ReadLine;
$SIG{TERM} = sub { print "I got a TERM\n"; exit; };
my $term = Term::ReadLine->new('Term1');
$term->ornaments(0);
my $prompt = 'cmd> ';
while ( defined (my $cmd = $term->readline($prompt)) ) {
    $term->addhistory($cmd) if $cmd =~ /\S/;   # only add non-blank commands
    chomp($cmd);
    if ($cmd =~ /^help$/) {
        print "Help Menu\n";
    }
    else {
        print "Nothing\n";
    }
}
This is due to perl's default paranoid handling of signals - behind the scenes, perl blocks SIGTERM before starting the readline call and restores it when it's finished. See Deferred Signals in perlipc for the details.
Term::ReadLine::Perl uses perl's IO, which knows about these issues and deals with them, so you don't see this bug with it. Term::ReadLine::Gnu uses the C library, which doesn't, so you do.
You can work around this with one of two methods:
set the environment variable PERL_SIGNALS to unsafe before running the script, as in:
bash$ PERL_SIGNALS=unsafe perl readline-test.pl
Note, BEGIN { $ENV{PERL_SIGNALS} = "unsafe"; } isn't enough, it needs to be set before perl itself starts.
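If you can't control how the script is launched, one workaround (a sketch of my own, not from perlipc) is to have the script re-exec itself once with the variable set:
# Re-exec ourselves with PERL_SIGNALS=unsafe if it isn't set already;
# setting it inside the running script is too late because perl reads
# the variable at interpreter startup.
if (($ENV{PERL_SIGNALS} // '') ne 'unsafe') {
    $ENV{PERL_SIGNALS} = 'unsafe';
    exec $^X, $0, @ARGV or die "re-exec failed: $!";
}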
Use POSIX signal functions:
# $SIG{TERM} = sub { print "I got a TERM\n"; exit; };
use POSIX;
POSIX::sigaction(SIGTERM, POSIX::SigAction->new(sub { print "I got a TERM\n"; exit; }));
Both the above seem to work in Linux; can't speak for Windows or other unices. Also, both of the above come with risks - see perlipc for the details.
I am running a program in Perl that at one point evaluates data in an if statement called from within a subroutine, e.g.
sub check_good {
    if (!$good) {
        # exit this subroutine
        # restart program
    }
    else {
        # keep going
    }
} # end sub
The problem I have is with exiting and restarting. I know that I can just use exit 0; to exit straight out, but obviously this is not correct if I want to go back to the beginning. I tried calling the subroutine which essentially starts the program, but of course once it has run it will go back to this point again.
I thought about putting it in a while loop, but this would mean putting the whole file in the loop and it would be very impractical.
I don't actually know whether this is possible, so any input would be great.
If you have not changed @ARGV, or you keep a copy of it, you could possibly do something like exec($^X, $0, @ARGV).
$^X and $0 (or $EXECUTABLE_NAME and $PROGRAM_NAME, see Brian's comment below) are the current perl interpreter and current perl script, respectively.
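A minimal sketch of that restart (the saved copy and the subroutine name are illustrative, not from the original answer):
my @original_argv = @ARGV;   # copy before option parsing consumes @ARGV

sub restart_program {
    # Replace the current process with a fresh run of the same script.
    exec($^X, $0, @original_argv) or die "could not re-exec: $!";
}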
An alternative would be to always have two processes: A supervisor and a worker.
Refactor all your logic into a subroutine called run (or main, or whatever). When your real logic detects that it needs to restart, it should exit with a predefined non-zero exit code (1, for example).
Then your main script and supervisor would look like this:
my $worker = fork;
die "fork failed: $!" unless defined $worker;
if ($worker == 0) {
    # worker (child) process
    run(@ARGV);
    exit 0;
}

# supervisor (parent) process
waitpid $worker, 0;
my $status = ($? >> 8);
if ($status == 1) { ... restart .. }
exit $status; # propagate exit code...
In the simple scenario where you just want to restart once, this might be a bit overkill. But if you at any point need to be able to handle other error scenarios this method might be preferable.
For example, if the exit code is 255, this indicates that the main script called die(). In this case you might want to implement some decision procedure about whether to restart the script, ignore the error, or escalate the issue.
There are quite a few modules on CPAN implementing such supervisors. Proc::Launcher is one of them, and its manual page includes an extensive discussion of related works. (I have never used Proc::Launcher; I link to it mainly for that discussion.)
There's nothing to stop you calling system on yourself. Something like this (clearly in need of a tidy), where I pass in a command-line argument to prevent the code calling itself forever.
#!/usr/bin/perl
use strict;
use warnings;
print "Starting...\n";
sleep 5;
if (!@ARGV) {
    print "Start myself again...\n";
    system("./sleep.pl secondgo");
    print "...and die now\n";
    exit;
} elsif (@ARGV && $ARGV[0] eq "secondgo") {
    print "Just going to die straightaway this time\n";
    exit;
}