Show bash script output live in a Perl Curses script

I'm new to Perl, and I'm trying to write a simple script launcher in Perl with Curses (Curses::UI).
On Stack Overflow I found a solution for printing the output of a Bash script in real time from Perl.
But I can't get this to work in my Curses script, writing that output into a TextEditor field.
For example, the Perl script:
#!/usr/bin/perl -w
use strict;
use Curses::UI;
use Curses::Widgets;
use IO::Select;
my $cui = new Curses::UI( -color_support => 1 );
[...]
my $process_tracking = $container_middle_right->add(
    "text", "TextEditor",
    -readonly => 1,
    -text     => "",
);

sub launch_and_read()
{
    my $s = IO::Select->new();
    open my $fh, '-|', './test.sh';
    $s->add($fh);

    while (my @readers = $s->can_read()) {
        for my $fh (@readers) {
            if (eof $fh) {
                $s->remove($fh);
                next;
            }
            my $l = <$fh>;
            $process_tracking->text( $l );
            my $actual_text = $process_tracking->text() . "\n";
            my $new_text = $actual_text . $l;
            $process_tracking->text( $new_text );
            $process_tracking->cursor_to_end();
        }
    }
}
[...]
$cui->mainloop();
This script contains a button that launches launch_and_read().
And test.sh:
#!/bin/bash
for i in $( seq 1 5 )
do
    sleep 1
    echo "from $$ : $( date )"
done
The result is that my application freezes while the bash script runs, and the full output is written to my TextEditor field only at the end.
Is there a way to show in real time what is happening in the shell script, without blocking the Perl script?
Many thanks, and sorry if this question seems stupid :x

You can't block: Curses's loop needs to run to process events, so you must poll. select with a timeout of zero can be used to poll.
my $sel;

sub launch_child {
    $sel = IO::Select->new();
    open my $fh, '-|', './test.sh';
    $sel->add($fh);
}

sub read_from_child {
    if (my @readers = $sel->can_read(0)) {
        for my $fh (@readers) {
            my $rv = sysread($fh, my $buf, 64*1024);
            if (!$rv) {
                $sel->remove($fh);
                close($fh);
                next;
            }

            ... add contents of $buf to the ui here ...
        }
    }
}

launch_child();
$cui->set_timer(read_from_child => \&read_from_child, 1);
$cui->mainloop();
Untested.
Note that I switched from readline (<>) to sysread, since the former blocks until a newline is received. Using blocking calls like read or readline defeats the point of using select. Furthermore, using buffered calls like read or readline can cause select to say nothing is waiting when something actually is. Never use read or readline with select.
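For the "... add contents of $buf to the ui here ..." placeholder, a minimal sketch using the widget methods already shown in the question (text, cursor_to_end) might look like the following; whether the explicit draw is needed depends on how Curses::UI schedules redraws after timer callbacks:

# Sketch only: append a chunk read by sysread to the TextEditor widget.
# $process_tracking is assumed to be the widget from the question.
sub append_to_ui {
    my ($buf) = @_;
    $process_tracking->text( $process_tracking->text() . $buf );
    $process_tracking->cursor_to_end();
    $process_tracking->draw();   # force a redraw so the update shows immediately
}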

Related

How to pass arguments when calling require_ok '*.pl' to test with Test::More

I'm wondering how to test each subroutine in *.pl files individually.
But I can't use 'require', because some of the *.pl files require arguments.
For example,
use Test::More;
require "some.pl";
will always fail the test at 'require', because "some.pl" expects an argument and ends with
exit(0);
at the end of the file.
I just want to test every subroutine in '*.pl' individually: Func1, usage, whatever.
some.pl looks like this:
my ($cmd) = @ARGV;
if (!defined $cmd) {
    usage();
} else {
    &Func1;
}
exit(0);

sub Func1 {
    print "hello";
}

sub usage {
    print "Usage:\n";
}
How can I write test code for "sub Func1" with Test::More?
Any suggestions appreciated.
To exercise a standalone script that you expect to exit, run it with system. Capture the output and inspect it after the system call returns.
use Test::More;

my $c = system("$^X some.pl arg1 arg2 > file1 2> file2");
ok($c == 0, 'program exited with successful exit code');

open my $fh, "<", "file1";
my $data1 = do { local $/; <$fh> };
close $fh;

open $fh, "<", "file2";
my $data2 = do { local $/; <$fh> };
close $fh;

ok( $data1 =~ /Func1 output/, "program called Func1" );
ok( $data2 !~ /This is how you use the program, you moron/,
    "usage message not printed to STDERR" );

unlink("file1", "file2");

Perl non-blocking user input

#!/usr/bin/perl -w
use Term::ReadKey;

ReadMode('cbreak');
while (1) {
    $char = ReadKey(-1);
    next unless defined $char;
    printf("Char: $char Decimal: %d\tHex: %x\n", ord($char), ord($char));
}
ReadMode('normal');
The above works great. But I want to be able to get user input while some executable is running, so I've tried the below, but it's not working. Maybe running an executable while trying to get user input is messing things up? If so, how do I go about doing it?
I am getting output from $myexe, and depending on the user input, I would like to filter different things from $myexe.
#!/usr/bin/perl -w
use Term::ReadKey;

my $myexe = 'bin/myexecutable';

open my $EXE,
    "$myexe distribute 2>&1 |"
    or die 'Cannot open EXE';

ReadMode('cbreak');
while (<$EXE>) {
    $char = ReadKey(-1);
    if (defined $char) {
        print ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> $char\n"; # I press a key but nothing prints out
    }
    print "$_\n";
}
ReadMode('normal');
I'm wary of running a 'busy-wait' loop like you'd get with Term::ReadKey. But what I'd suggest, since you're trying to do two things at once, is that it may be worth writing a spot of parallel code.
Something like:
#!/usr/bin/perl
use strict;
use warnings;

use threads;
use threads::shared;
use Term::ReadKey;

my $myexe = 'bin/myexecutable';

my $filter : shared;

sub worker {
    open my $EXE, "$myexe distribute 2>&1 |"
        or die 'Cannot open EXE';
    while ( my $line = <$EXE> ) {
        # do something with filter here;
        print "$filter : $line";
    }
}

$filter = 0;
threads->create( \&worker );

my $keypress;
ReadMode 4;
while ( threads->list(threads::running) ) {
    while ( not defined( $keypress = ReadKey(-1) )
        and threads->list(threads::running) )
    {
        print "Waiting\nRunning:" . threads->list(threads::running) . "\n";
        sleep 1;
    }
    # guard against the worker finishing while no key was pressed
    if ( defined $keypress ) {
        print "Got $keypress\n";
        $filter = $keypress;
    }
}
ReadMode 0;

foreach my $thr ( threads->list ) {
    $thr->join();
}
This is some fairly simple example code; you can extend it in a variety of ways, but the principle is this:
you start a thread to 'do the work';
you handle the 'keypress watching' in the 'main' thread.
Because there's a sleep in there, you're not busy-waiting on a keypress (i.e. polling as fast as the processor will spin).
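As a hypothetical example of the "# do something with filter here" step, the worker could skip lines that don't contain the last keypress. Assigning a single shared scalar as above is atomic enough; compound updates would need lock():

# Hypothetical filter logic for the worker loop above: only pass lines
# through when they contain the character last typed by the user.
while ( my $line = <$EXE> ) {
    next if $filter and $line !~ /\Q$filter\E/;
    print "$filter : $line";
}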

Perl (Tk): how to run a system command asynchronously and react to its output?

I'm writing a wrapper for an external command ("sox", if that helps) with Perl Tk.
I need to run it asynchronously, of course, to avoid blocking Tk's MainLoop().
But I need to read its output to notify the user of the command's progress.
I am testing a solution like this one, using IPC::Open3:
{
    $| = 1;
    $pid = open3(gensym, ">&STDERR", \*FH, $cmd) or error("Error running command \"$cmd\"");
}
while (defined($ch = FH->getc)) {
    notifyUser($ch) if ($ch =~ /$re/);
}
waitpid $pid, 0;
$retval = $? >> 8;
POSIX::close($_) for 3 .. 1024; # close all open handles (arbitrary upper bound)
But of course the while loop blocks MainLoop until $cmd terminates.
Is there some way to read the output handle asynchronously?
Or should I go with the standard fork stuff?
The solution should work under Win32, too.
For non-blocking read of a filehandle, take a look at Tk::fileevent.
Here's an example script showing how one can use a pipe, a forked process, and fileevent together:
use strict;
use IO::Pipe;
use Tk;

my $pipe = IO::Pipe->new;
if (!fork) { # Child XXX check for failed forks missing
    $pipe->writer;
    $pipe->autoflush(1);
    for (1..10) {
        print $pipe "something $_\n";
        select undef, undef, undef, 0.2;
    }
    exit;
}
$pipe->reader;

my $mw = tkinit;
my $text;
$mw->Label(-textvariable => \$text)->pack;
$mw->Button(-text => "Button", -command => sub { warn "Still working!" })->pack;

$mw->fileevent($pipe, 'readable', sub {
    if ($pipe->eof) {
        warn "EOF reached, closing pipe...";
        $mw->fileevent($pipe, 'readable', '');
        return;
    }
    warn "pipe is readable...\n";
    chomp(my $line = <$pipe>);
    $text = $line;
});

MainLoop;
Forking may or may not work under Windows. Also, one needs to be cautious when forking within Tk: make sure that only one of the two processes does X11/GUI stuff, otherwise bad things will happen (X11 errors, crashes...). A good approach is to fork before creating the Tk MainWindow.
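A minimal sketch of that "fork first, Tk later" ordering, assuming the same IO::Pipe setup as above; the child does plain I/O only and never touches Tk:

use strict;
use IO::Pipe;
use Tk;

my $pipe = IO::Pipe->new;
my $pid  = fork;
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {      # child: no Tk calls at all
    $pipe->writer;
    $pipe->autoflush(1);
    print $pipe "work done\n";
    exit;
}

$pipe->reader;
my $mw = tkinit;      # parent: the MainWindow is created only after the fork
MainLoop;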

Using the Inkscape shell from perl

Inkscape has a shell mode, invoked like this:
inkscape --shell
where you can execute commands like this:
some_svg_file.svg -e some_png_output.png -y 1.0 -b #ffffff -D -d 150
which will generate a PNG file, or like this:
/home/simone/some_text.svg -S
which gives you the bounding box of all elements in the file in the return message like this
svg2,0.72,-12.834,122.67281,12.942
layer1,0.72,-12.834,122.67281,12.942
text2985,0.72,-12.834,122.67281,12.942
tspan2987,0.72,-12.834,122.67281,12.942
The benefit of this is that you can perform operations on SVG files without having to restart Inkscape every time.
I would like to do something like this:
sub do_inkscape {
    my ($file, $commands) = @_;
    # capture output
    return $output;
}
Things work OK if I use open2 and forking like this:
use IPC::Open2;
$pid = open2(\*CHLD_OUT, \*CHLD_IN, 'inkscape --shell');
$\ = "\n"; $/ = ">";

my $out; open my $fh, '>', \$out;

if (!defined($kidpid = fork())) {
    die "cannot fork: $!";
} elsif ($kidpid == 0) {
    while (<>) { print CHLD_IN $_; }
} else {
    while (<CHLD_OUT>) { chop; s/\s*$//gmi; print "\"$_\""; }
    waitpid($kidpid, 0);
}
but I can't figure out how to send just one line of input and capture only that command's output, without having to restart Inkscape every time.
Thanks
Simone
You don't need to fork; open2 handles that by itself. What you need to do is find a way of detecting when Inkscape is waiting for input.
Here's a very basic example of how you could achieve that:
#! /usr/bin/perl
use strict;
use warnings;

use IPC::Open2;

sub read_until_prompt($) {
    my ($fh) = (@_);
    my $done = 0;
    while (!$done) {
        my $in;
        read($fh, $in, 1);
        if ($in eq '>') {
            $done = 1;
        } else {
            print $in;
        }
    }
}

my ($is_in, $is_out);
my $pid = open2($is_out, $is_in, 'inkscape --shell');

read_until_prompt($is_out);
print "ready\n";

print $is_in "test.svg -S\n";
read_until_prompt($is_out);

print $is_in "quit\n";
waitpid $pid, 0;

print "done!\n";
The read_until_prompt function reads from Inkscape's output until it finds a > character, and assumes that when it sees one, Inkscape is ready.
Note: this is too simple; you will probably need more logic in there to make it work reliably if a > can appear outside the prompt in the output you're expecting. There is also no error checking at all in the above script, which is bad.
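One way to harden this slightly, as a sketch: accumulate the output in a buffer and only treat it as a prompt when the > appears at the very end of what has been read so far (still an assumption about Inkscape's prompt format):

# Sketch: buffer the output and return it once it ends with the prompt.
sub read_until_prompt {
    my ($fh) = @_;
    my $buf = '';
    until ($buf =~ />\s*$/) {
        my $n = sysread($fh, my $chunk, 1024);
        die "inkscape exited unexpectedly" unless $n;
        $buf .= $chunk;
    }
    return $buf;    # caller can parse the bounding-box lines out of this
}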

How do I make two perl files communicate?

So I have something like this:
fork.pl
for $str (@files)
{
    my($command) = "perl command.pl " . $str;
    exec( $command );
}
command.pl
$file = $ARGV[0] . ".csv";
# code that counts rows here
print $rowcount;
So as the end result I have ten processes launched, each counting how many rows are in its CSV file.
I do not need help editing this code; it works (this is just a compressed version). I need help figuring out how to take the output ($rowcount) of the ten processes and combine it into one result for further processing.
I keep some utility code around for just this purpose... this is tweaked slightly to your question, and includes a synchronized global counting method.
#!/usr/bin/perl
use threads;
use Thread::Queue;

my @workers;
my $num_threads = 10;
my $queue = new Thread::Queue;
my $total_lines = 0;

for (0..$num_threads-1) {
    $workers[$_] = new threads(\&worker);
}

while ($_ = shift @ARGV) {
    $queue->enqueue($_);
}

sub worker() {
    while ($file = $queue->dequeue) {
        # line counting code here
        global_counter($lines_counted);
    }
}

sub global_counter() :locked {
    # add to the number of lines counted
    $total_lines += shift;
}

for (0..$num_threads-1) { $queue->enqueue(undef); }
for (0..$num_threads-1) { $workers[$_]->join; }

print $total_lines;
This kind of communication is solved using pipes (here is a simple example):
# -- fork.pl -------------------------
for (1..3) {
    open my $PIPE, "perl command.pl |";
    print "catch: $_\n" while (<$PIPE>);
    close $PIPE;
}

# -- command.pl ----------------------
print rand(1);
It prints (random numbers):
catch: 0.58929443359375
catch: 0.1290283203125
catch: 0.907012939453125
You need to look either at threads or at interprocess communication (e.g. sockets or shared memory) when using fork.
Compressed, but it won't work. I'm assuming that in fork.pl you fork before exec'ing? Backticks capture the output of the called process, namely your prints:
fork.pl
for $str (@files)
{
    my($command) = "perl command.pl " . $str;
    print `$command`;
}
But rather than forking and launching processes, wouldn't it be smarter to turn the second file into a module?
package MyCommand;
use Exporter;
our @EXPORT = qw( command );

sub command {
    my $file = $_[0] . '.csv';
    ...
    return $rowcount;
}

1;
fork.pl:
use MyCommand;
...
my @rowcounts;
for my $str (@files) {
    push @rowcounts, command($str);
}
A bit of self-promotion, but I just posted this in your other thread, which seems relevant enough: How to run in parallel two child command from a parent one?
Accumulate pipes from children:
#!/usr/bin/perl -w
use strict;
my @files = qw/one.csv two.csv three.csv/;
my $command = "perl command.pl";

my @pipes;
foreach (@files) {
    my $fd;
    open $fd, "-|", "$command $_" and push @pipes, $fd;
}

my $sum = 0;
foreach my $pp (@pipes) {
    $sum += $_ if defined($_ = <$pp>);
}
print $sum;
Then you can just read them one by one (as in the example above), or use IO::Select to read data as it appears in each pipe.
A hash table in addition to the array is also good if you want to know which data comes from which source; see the sketch below.
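A sketch of that IO::Select variant, with a hash mapping each pipe back to its file name. Per the caveat in the first answer, it reads with sysread rather than readline:

#!/usr/bin/perl -w
use strict;
use IO::Select;

my @files   = qw/one.csv two.csv three.csv/;
my $command = "perl command.pl";

my $sel = IO::Select->new;
my (%source, %buf);
foreach my $file (@files) {
    open my $fd, "-|", "$command $file" or next;
    $sel->add($fd);
    $source{$fd} = $file;           # remember which file this pipe belongs to
}

while ($sel->count) {
    foreach my $fh ($sel->can_read) {
        my $n = sysread($fh, my $chunk, 4096);
        if ($n) {
            $buf{$fh} .= $chunk;    # accumulate this child's output
        } else {
            $sel->remove($fh);      # EOF: child is done
            close $fh;
        }
    }
}

my $sum = 0;
foreach my $fh (keys %buf) {
    print "$source{$fh}: $buf{$fh}\n";
    $sum += $buf{$fh};              # each child printed its row count
}
print "total: $sum\n";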