I want to display an iterative progress bar during the execution of a particular command in my Perl-CGI program. I use the CGI::ProgressBar module to achieve this. For example, if I want to show the progress bar during the execution of an RKHunter scan, this is the code I wrote:
use CGI::ProgressBar qw/:standard/;
$| = 1;
print progress_bar( -from=>1, -to=>100 );
open(my $quik_rk, '-|', 'rkhunter', '--enable', '"known_rkts"') or print "ERROR RUNNING BASIC ROOTKIT CHECK!!";
# print the progress bar while the command executes
while (<$quik_rk>) {
    print update_progress_bar;
    #print "<img src=\"ajax-loader.gif\"></img>";
}
close($quik_rk);
This works fine. However, when I try the same on another command (this one scans using Linux Maldet) immediately after the code above:
open(my $quik_lmd, '-|', 'maldet', '-a', '/home?/?/public_html') or print "ERROR RUNNING BASIC MALWARE CHECK!!";
my $this_ctr = 0;
while (<$quik_lmd>) {
    $this_ctr++;
    print update_progress_bar;
}
close($quik_lmd);
The progress bar isn't displayed, but the command itself runs in the background.
What am I doing wrong?
Is there a better way to show a progress bar on a browser in Perl-CGI?
I am not familiar with RKHunter, but based on your results my guess is that it outputs a line of text for each test it runs, while the other command does not.
Each line of text output by RKHunter will trigger the next iteration of <$quik_rk>.
The second command, <$quik_lmd>, is likely silent, so it never triggers the loop. Once the command terminates, execution continues after your while.
The key bit here is "line of text". The <$filehandle> operator returns a line of text each time it sees a newline character. In order to do what you want using this construct, you would need to coerce the second command into being verbose about its activities, and most importantly, to be verbose with a lot of newlines.
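If you can't make the command chattier, one way to sidestep the newline requirement is to read fixed-size chunks instead of lines. Here is a minimal sketch, assuming the same CGI::ProgressBar setup as in the question and a placeholder path; setting $/ to a reference to a number makes the readline operator return records of that many bytes:

open(my $quik_lmd, '-|', 'maldet', '-a', '/some/path')   # placeholder path
    or print "ERROR RUNNING BASIC MALWARE CHECK!!";
{
    local $/ = \64;   # read 64-byte records instead of waiting for newlines
    while (my $chunk = <$quik_lmd>) {
        print update_progress_bar;
    }
}
close($quik_lmd);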
Alternatively, you can open a background process and use sleep to manage your loop, e.g.,
use strict;
use POSIX qw(WNOHANG);
my $pid = open(my $quik_rk, '-|', 'sleep', '5'); # replace with your command
do {
    print "waiting\n"; # update_progress_bar;
    sleep 1;           # seconds between updates
} while (waitpid($pid, WNOHANG) == 0);
How can I echo a progress bar while an external process is executing and capture its STDOUT when it's done, using only standard modules and without using fork?
Run an external process, something like: @array = `ls -l`;
While it is executing, print a progress bar, like: print '.';
Capture the process's STDOUT into an array when it is done
Continue with the rest of the main script
I'm reading about IPC::Open2, IPC::Open3, but I don't understand how to use them for this task. Maybe it's not the right direction?
What do you have so far? If you are having trouble with the interprocess communication, forget about the progress bar for the moment and ask just about that.
You can't really have a progress bar for something that has an indeterminate end. If you don't know how much input you will read, you don't know what fraction of it you have read. People tend to think of progress bars as a representation of the fraction of work done, not just activity. That is, unless you use macOS and understand that "less than one minute" means "more than three hours". ;)
I tend to do something simple, where I output a dot every so often. I don't know how many dots I'll see, but I know that I'll see new ones.
$|++;   # unbuffer stdout to see the dots as they are output

while ( <$fh> ) {
    print '.' unless $. % $chunk_size;                      # $. is the line number
    print "\n[$.] " unless $. % ($chunk_size * $row_length);
    ...
}
That $fh can be anything that you want to read from, including a pipe. perlopentut has examples of reading from external processes. Those are doing a fork, though. And, the other modules will fork as well. What's the constraint that makes you think you can't use fork?
You can get more fancy with your display by using curses and other things (a carriage return is handy :), but I'm not inclined to type those out.
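For what it's worth, here is a minimal sketch of the carriage-return trick, assuming $fh is the pipe filehandle from above:

$|++;   # unbuffer STDOUT so each update appears immediately
my @spinner = ('|', '/', '-', '\\');
my $i = 0;
while ( <$fh> ) {
    # \r moves back to the start of the line, so each update overwrites the last
    printf "\rprocessed %d lines %s", $., $spinner[ $i++ % @spinner ];
}
print "\n";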
Perhaps the OP is looking for something like the following, just to indicate that the external process is running.
Define a handler for $SIG{ALRM} and set alarm 1 to run the handler every second. Once the process completes, call alarm 0 to turn the alarm handler off.
use strict;
use warnings;
use feature 'say';
use Data::Dumper;
my $ls_l; # variable to store output of external command
$| = 1; # unbuffered output
$SIG{ALRM} = \&handler;
alarm 1; # run handler every second
say 'Running sig_alarm_sleep';
$ls_l=`./sig_alarm_sleep`;
say ' done';
alarm 0;
my @fields = qw(rwx count user group size month day time name);
my @lines  = split("\n", $ls_l);
my @array;

for (@lines) {
    my %x;
    @x{@fields} = split(' ', $_);   # hash slice: map field names to columns
    push @array, \%x;
}

say Dumper(\@array);
exit 0;
sub handler {
    print '.';
    $SIG{ALRM} = \&handler;
    alarm 1;
}
A sample sig_alarm_sleep bash script:
#!/usr/bin/bash
sleep 20
ls -al
I have been running some commands with the IPC::Run module and everything is fine, except that I can't access the output (STDOUT, STDERR) the process produced, which was redirected into variables. Is there a way to retrieve it in the error handling?
my @commands = ();
foreach my $id (1..3) {
    push @commands, ["perl", "script" . $id . ".pl"];
}

foreach my $cmd (@commands) {
    my $out = "";
    my $err = "";
    my $h = harness $cmd, \undef, \$out, \$err, timeout(12, exception => {name => 'timeout'});
    eval {
        run $h;
    };
    if ($@) {
        my $err_msg = $@; # save in case another error happens
        print "$out\n";
        print "$err\n";
        $h->kill_kill;
    }
}
I don't need any input for now, I just need to execute it and get the output.
EDIT
I have been testing it with running perl scripts which look like this:
for (my $i = 0; $i < 10; $i++) {
    sleep 1;
    print "Hello from script 1 " . localtime() . "\n";
}
I have 3 such scripts with different times and the 3rd takes 20 seconds to complete, which is more than the 12 I have in the timer.
As noted by @ysth, the reason you do not get any output is that the STDOUT and STDERR of the process corresponding to the command $cmd are not line buffered, but rather block buffered. So all output is collected in a buffer which is not shown (printed) until the buffer is full or is explicitly flushed. However, when your command times out, all the output is still in the buffer; it has not yet been flushed, and hence not collected into the variable $out in the parent process (script).
Also note that since your $cmd script is a Perl script, this behavior is documented in perlvar:
$|
If set to nonzero, forces a flush right away and after every write
or print on the currently selected output channel. Default is 0
(regardless of whether the channel is really buffered by the system or
not; $| tells you only whether you've asked Perl explicitly to flush
after each write). STDOUT will typically be line buffered if output is
to the terminal and block buffered otherwise.
The problem (that the program is not connected to a terminal or a tty) is also noted in the documentation page for IPC::Run :
Interactive applications are usually optimized for human use. This can
help or hinder trying to interact with them through modules like
IPC::Run. Frequently, programs alter their behavior when they detect
that stdin, stdout, or stderr are not connected to a tty, assuming
that they are being run in batch mode. Whether this helps or hurts
depends on which optimizations change. And there's often no way of
telling what a program does in these areas other than trial and error
and occasionally, reading the source. This includes different versions
and implementations of the same program.
The documentation also lists a set of possible workarounds, including using pseudo terminals.
One solution for your specific case is then to explicitly make STDOUT line buffered at the beginning of your script:
STDOUT->autoflush(1); # flush STDOUT after every print
# Alternatively use: $| = 1;
for (my $i = 0; $i < 10; $i++) {
    sleep 1;
    print "Hello from script 1 " . localtime() . "\n";
}
Edit:
If you cannot modify the scripts you are running for some reason, you could try connecting the script to a pseudo terminal. So instead of inserting statements like STDOUT->autoflush(1) into the source code of the script, you can fool the script into believing it is connected to a terminal, and hence that it should use line buffering. For your case, we just add a >pty> argument before the \$out argument in the call to harness:
my $h = harness $cmd, \undef, '>pty>', \$out,
    timeout(12, exception => {name => 'timeout'});
eval {
    run $h;
};
Here is a little Perl server. It displays (1), accepts a line of input, then displays (2), etc. If you type "error" or "commit", it gives a custom message. If you type "exit", it quits. Otherwise, it just endlessly takes lines of input.
use strict;
use warnings;
$|++;
my $counter = 1;
print "($counter) ";
while (<STDIN>) {
    chomp;
    if ($_ eq "error")  { print "Error on command #$counter\n"; }
    if ($_ eq "commit") { print "Committing data\n"; }
    if ($_ eq "exit")   { print "Exiting program...\n"; exit; }
    $counter++;
    print "($counter) ";
}
Now, here is an Expect.pm client script to interact with the server script by typing in various lines.
use strict;
use warnings;
use Expect;
$|++;
my $exp = new Expect;
$exp->raw_pty(1);
$exp->log_file("/tmp/expect.out");
$exp->log_stdout(1);
my @commands = (
"This is the first command",
"Here is the second command",
"error",
"commit",
"This is the last command",
"exit",
);
$exp->spawn("./expecttest_server.pl");
foreach my $command (@commands) {
    print "$command\n";
    $exp->send("$command\n");
    $exp->expect(1, '-re', '\(\d+\)');
}
$exp->soft_close();
What I want is to be able to store the entire session from start to finish, including everything the server script generated, and everything the Expect.pm script sent.
That is, I want my client script to be able to return output like this, which is what you would see if you ran and interacted with the server script manually:
(1) This is the first command
(2) Here is the second command
(3) error
Error on command #3
(4) commit
Committing data
(5) This is the last command
(6) exit
Exiting program...
But the STDOUT display that comes from running the client script looks like this:
This is the first command
(1) (2) Here is the second command
error
(3) Error on command #3
(4) commit
This is the last command
Committing data
(5) (6) exit
Exiting program...
and the file specified by $exp->log_file (/tmp/expect.out) shows this:
(1) (2) (3) Error on command #3
(4) Committing data
(5) (6) Exiting program...
I've tried experimenting by logging various combinations of the command itself + the before_match and after_match variables returned by $exp->expect(). But so far I haven't gotten the right combination. And it seems like an awfully clunky way to get what I'm looking for.
So, what's the best practice for capturing the entirety of an Expect.pm session?
Thanks to anyone who can help!
When run on the command line, your server prints
(1)
to stdout immediately and waits for input.
However, when you create an Expect object, you are actually setting up a PTY (a pseudo terminal). Any processes you spawn will have their stdin and stdout connected to this PTY, not to the TTY that your shell is connected to. This means that it's up to your Expect object whether output from the spawned process is displayed or not; it will not be displayed automatically.
When you spawn a process, the Expect object holds onto any output in an input buffer. When you send a string to the process, any additional output that is generated will be read into the buffer. If the PTY has echoing enabled (the default), the string you send will be echoed back, but the contents of the Expect object's buffer will not.
When you call the expect method, Expect waits until a matching string appears in the input buffer. If a match is found before the timeout expires, expect returns and prints the matching string.
So, all you need to do is call expect before sending your first command, something like this:
Server
use strict;
use warnings;
$| = 1;
my $counter = 1;
do {
    print "($counter) ";
    $counter++;
} while (<>);
Client
use strict;
use warnings;
use Expect;
$| = 1;
my $exp = Expect->new;
my $server = './expect_server';
$exp->spawn($server);
my @commands = qw(foo bar baz);

foreach my $command (@commands) {
    $exp->expect(1, '-re', '\(\d+\)');
    $exp->send("$command\r");
}
$exp->soft_close;
Output
(1) foo
(2) bar
(3) baz
(4)
Note that this is the exact same process you would use when interacting with a process manually: wait for the prompt, then type your command.
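If you also want the whole transcript on disk, the log_file method the question already uses should now capture the session in the right order, since each prompt is consumed before the next command is sent (the path here is a placeholder):

$exp->log_file('/tmp/expect_session.log');   # placeholder path; call before sending commands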
I am trying to write a Perl CGI script which executes an RKHunter scan. While executing the command, I would like to show something to indicate progress instead of the actual output, which is to be redirected to another file. The code thus far is:
open(my $quik_rk, '-|', 'rkhunter', '--enable', '"known_rkts"') or print "ERROR RUNNING QUICK ROOTKIT CHECK!!";
while (<$quik_rk>) {
    print ".";
}
print "\n";
close($quik_rk);
This doesn't show any output, and I am presented with a blank screen while waiting for execution to complete. All the dots are printed to the screen together instead of one by one. Moreover, when I use the following to redirect, the command doesn't execute at all:
open(my $quik_rk, '-|', 'rkhunter', '--enable', '"known_rkts"', '>>', '/path/to/file') or print "ERROR RUNNING QUICK ROOTKIT CHECK!!";
How can I fix this in such a way that the verbose output is redirected to a file and only a .... steadily progresses on the screen?
$|=1;
At the beginning of your script.
This turns autoflush on so every print actually prints instead of waiting for a newline before flushing the buffer.
Also see: http://perldoc.perl.org/perlvar.html#Variables-related-to-filehandles
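As for the redirection part of the question: the list form of open passes '>>' and the filename to rkhunter as literal arguments rather than performing shell redirection, which is why that version doesn't run. Here is a minimal sketch that writes the verbose output to a file from within Perl instead (the log path is a placeholder):

$| = 1;   # autoflush so each dot appears as it is printed
open(my $log, '>>', '/path/to/scan.log')
    or print "ERROR OPENING LOG FILE!!";
open(my $quik_rk, '-|', 'rkhunter', '--enable', 'known_rkts')
    or print "ERROR RUNNING QUICK ROOTKIT CHECK!!";
while (my $line = <$quik_rk>) {
    print {$log} $line;   # verbose output goes to the file
    print '.';            # only a dot reaches the browser
}
print "\n";
close($quik_rk);
close($log);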
I am creating a Perl/Tk GUI that calls a separate exe inside it. The progress bar widget is the only indication that execution is happening, but the problem is that the bar freezes while the GUI waits for the separate exe to finish; only after it is done can the progress bar be updated.
Is there a better way to run the progress bar simultaneously with the separate exe, so that it reflects the real-time progress of the code?
How are you calling the external program? If you are blocking on something like system(), don't do that. Run the program in a separate process, then poll to check whether it is still running. As long as it is running, you update your progress indicator, as in the sketch below.
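Here is a minimal sketch of that polling approach, assuming Tk::ProgressBar is available and using sleep as a stand-in for the external exe:

use strict;
use warnings;
use POSIX qw(WNOHANG);
use Tk;
use Tk::ProgressBar;

my $pid = open(my $child, '-|', 'sleep', '5')   # stand-in for the real exe
    or die "could not execute: $!";

my $mw = MainWindow->new;
my $percent = 0;
$mw->ProgressBar(-variable => \$percent)->pack(-fill => 'x');

my $poll;
$poll = sub {
    if (waitpid($pid, WNOHANG) == 0) {
        $percent = ($percent + 10) % 100;   # pulse while the child is running
        $mw->after(250, $poll);             # poll again in 250 ms
    }
    else {
        $percent = 100;                     # child finished
        close $child;
    }
};
$mw->after(250, $poll);
MainLoop;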
Do you get progress output from the other process? If so, you can use open() to run your subprocess.
open(MYSUBPROC, '-|', "myprocess args") or die "could not execute!";
while (<MYSUBPROC>) {
    # do something...
}
close(MYSUBPROC) || warn "subproc returned error $?";
You should also take a look at the perlipc sections on using open() for IPC and on safe pipe opens.
I don't know enough Perl/Tk to whip up a quick example.
worker.pl
#!/usr/bin/perl
use strict;
use warnings;
$| = 1;
for (0 .. 10) {
    print 10 * $_, "\n";
    sleep 1;
}
boss.pl
#!/usr/bin/perl
use strict;
use warnings;
$| = 1;
open my $worker_h, '-|', 'worker.pl'
or die "Cannot open pipe to worker: $!";
while ( my $status = <$worker_h> ) {
    chomp $status;
    my $ticks = 2 * ($status / 10);
    print '=' x $ticks, "\r";   # carriage return so the next, longer bar overwrites this one
    # do other stuff;
}
close $worker_h
or die "Error closing pipe to worker: $?";