I use IPC::Run to get output from an external executable in a cron-run script. I need it to filter and make decisions based on the output on the fly. The problem is that it gives me the output not on the fly but in batches: many lines at once, and only after the executable has been running for a while. Is it possible to somehow flush the output, like we can for grep with grep --line-buffered? I do not see this properly answered on any of the Perl sites. Here is the relevant part of my script:
use IPC::Run qw( start pump finish );
...
my $externalExecRun = start \@executableAndParameters, \undef, \$executableStdout, \$executableStderr;
while (42) {
    pump $externalExecRun;
    if ($executableStdout eq '' and $executableStderr eq '') { last; }
    WriteToLog("\$executableStdout: -->$executableStdout<--"); # This writes many lines at once
    WriteToLog("\$executableStderr: -->$executableStderr<--");
    $executableStdout = "";
    $executableStderr = "";
}
finish $externalExecRun;
You can use IPC::Run's new_chunker to have it give you output on a line-by-line basis:
use warnings;
use strict;
use IPC::Run qw/ start new_chunker /;
use Data::Dump;
my $run = start ['perl','-le','print "a" x $_ for 1..180'],
    '>',  new_chunker, \my $out,
    '2>', new_chunker, \my $err;
while (1) {
    $run->pump;
    last unless defined $out || defined $err;
    dd $out, $err;
    ($out, $err) = ();
}
$run->finish;
It's still possible that the external program won't produce output on a line-by-line basis. In that case, at least on *NIX, changing the first '>' into '>pty>' (as suggested by @ikegami in the comments) will hopefully help; or see one of the links provided by @daxim.
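Spelled out, that pty variant would look something like the following (an untested sketch; it assumes a *NIX system and keeps the same toy child command):

use warnings;
use strict;
use IPC::Run qw/ start new_chunker /;

# '>pty>' runs the child with a pseudo-tty on stdout, which makes many
# programs switch to line buffering; stderr stays on a regular pipe.
my $run = start ['perl','-le','print "a" x $_ for 1..180'],
    '>pty>', new_chunker, \my $out,
    '2>',    new_chunker, \my $err;
while (1) {
    $run->pump;
    last unless defined $out || defined $err;
    print "OUT: $out" if defined $out;
    print "ERR: $err" if defined $err;
    ($out, $err) = ();
}
$run->finish;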
Related
I want to write:
... | my_filter | myperlprogram
But I do not know how to run my_filter until I have started myperlprogram.
Can I somehow, in myperlprogram, loop STDIN through my_filter before reading it?
I am thinking something like:
pipe($a, $b);                 # $a: read end, $b: write end
if (not fork()) {
    close STDOUT;
    open STDOUT, '>&', $b;    # child: stdout -> pipe write end
    exec "my_filter --with the correct --options";
} else {
    close STDIN;
    open STDIN, '<&', $a;     # parent: stdin <- pipe read end
}
# continue reading STDIN now looped through `my_filter`
It's not at all clear from the description why a simple
open STDIN, '-|', 'your_filter', '--option1', ...
will not do.
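That is, something like this sketch (my_filter and its options are placeholders):

use warnings;
use strict;

# Re-open STDIN as a pipe from the filter. The filter inherits the
# original STDIN, so input flows original STDIN -> my_filter -> us.
open STDIN, '-|', 'my_filter', '--with-the-correct', '--options'
    or die "Cannot start filter: $!";

while (my $line = <STDIN>) {
    print "filtered: $line";   # myperlprogram's normal processing goes here
}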
The way I see the problem: filter this script's STDIN using an external program which is started from inside the script, once the script is already running (so not via a pipeline). With IPC::Run:
use warnings;
use strict;
use feature 'say';
use IPC::Run qw(start pump finish);
my $filtered_in;

FILTER_IN: {
    my @cmd = qw(a_filter.pl);    # add filter's options/arguments
    my $h = start \@cmd, \my $in, \$filtered_in;

    while (<>) {
        $in = $_;
        pump $h while length $in;

        # Wait for filter's output -- IF WANT to process lines as received
        pump $h until $filtered_in =~ /\n\z/;

        chomp $filtered_in;    # process/use filter's output
        $filtered_in .= '|';   # as it's coming (if needed)
    }
    finish $h or die "Cleanup returned: $?";
};
say $filtered_in // 'no input';
This allows one to process the filter's lines of output as they are emitted. If that is not needed and we only want to accumulate the filter's output for later, then you don't need the code under # Wait for...
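In that accumulate-only case the loop shrinks to something like this sketch (same hypothetical a_filter.pl):

use warnings;
use strict;
use feature 'say';
use IPC::Run qw(start pump finish);

my $filtered_in;
my @cmd = qw(a_filter.pl);        # add filter's options/arguments
my $h = start \@cmd, \my $in, \$filtered_in;
while (<>) {
    $in = $_;
    pump $h while length $in;     # feed input; don't wait for output
}
finish $h or die "Cleanup returned: $?";
say $filtered_in // 'no input';   # filter's entire output, all at once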
Simplest test with a_filter.pl such as
use warnings;
use strict;
STDOUT->autoflush(1);
my $cnt = 0;
while (<>) { print "line ", ++$cnt, ": ", $_ }
and then run
echo "a\nfew\nlines" | script.pl
with output
line 1: a|line 2: few|line 3: lines|
from our toy processing in script.pl above.
This will filter input via a file as well,
script.pl < input.txt
I have a Perl script which runs an external executable. That executable runs for a while (sometimes seconds, sometimes an hour), can spit out text to both STDOUT and STDERR, and produces an exit code; all of these are needed. The code below first demonstrates a successful run of the external executable (a small bash script whose one-line body is shown in the comment), then a run with a bad exit status (the example uses gs, Ghostscript).
I want the external executable to hand its STDOUT to the Perl script for evaluation, filtering, formatting etc. before it gets logged to a logfile (which is used for other stuff as well), while the external program is still executing. It would be great to handle STDERR the same way.
As it stands, this script logs everything from STDOUT, but only after the executable has finished, and STDERR is logged only directly, without evaluation etc. I have no possibility to install any additional Perl parts, modules etc.
How do I get my Perl script to receive each line (STDOUT and STDERR) from the executable while it is being emitted (not just at the end), as well as the exit code for other purposes?
#!/usr/bin/perl
@array_executable_and_parameters  = "/home/username/perl/myexecutable.sh"; # ls -lh ; for i in {1..5}; do echo X; sleep 1; done
@array_executable_and_parameters2 = "gs aaa";
my $line;
chdir("/home/username/perl/");
$logFileName = "logfileforsomespecificinput.log";
open(LOGHANDLE, ">>$logFileName");
open(STDERR, '>>', $logFileName);  # Prints to logfile directly
#open (STDERR, '>>', <STDOUT>);    # Prints to own STDOUT (screen or mailfile)
print LOGHANDLE "--------------OK run\n";
open CMD, '-|', @array_executable_and_parameters or die $!;
while (defined($line = <CMD>)) {   # Logs all at once at end
    print LOGHANDLE "-----\$line=$line-----\n";
}
close CMD;
$returnCode1 = $? >> 8;
print LOGHANDLE "\$returnCode1=$returnCode1\n";
print LOGHANDLE "--------------BAD run\n";
open CMD2, '-|', @array_executable_and_parameters2 or die $!;
while (defined($line = <CMD2>)) {
    print LOGHANDLE "-----\$line=$line-----\n";
}
close CMD2;
$returnCode2 = $? >> 8;
print LOGHANDLE "\$returnCode2=$returnCode2\n";
close(LOGHANDLE);
Take 2. After good advice in the comments I have tried IPC::Run. But something still does not work as expected. I seem to be missing how the looping from start (or pump?) to finish works, as well as how to make it iterate when I do not know what the last output will be, which all the examples everywhere seem to assume. So far I have the following code, but it does not work line by line. It spits out the listing of files in one go, then waits until the external loop has fully finished before printing all the X's. How do I tame it to the initial needs?
#! /usr/bin/perl
use IPC::Run qw( start pump finish );

@array_executable_and_parameters = ();
push(@array_executable_and_parameters, "/home/username/perl/myexecutable.sh"); # ls -lh ; for i in {1..5}; do echo X; sleep 1; done

my $h = start \@array_executable_and_parameters, \undef, \$out, \$err;
pump $h; # while ($out or $err);
print "1A. \$out: $out\n";
print "1A. \$err: $err\n";
$out = "";
$err = "";
finish $h or die "Command returned:\n\$?=$?\n\$\@=$@\nKilled by=" . ($? & 0x7F) . "\nExit code=" . ($? >> 8) . "\n";
print "1B. \$out: $out\n";
print "1B. \$err: $err\n";
Look at the IPC modules, especially IPC::Cmd, IPC::Run, and, if those don't satisfy, IPC::Run3. There are a lot of details you would have to cover yourself, and those modules will make your life a lot easier.
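For example, IPC::Run3 gets you both streams and the exit code in a few lines (a sketch; the command is a placeholder). Note that it collects the output only after the command finishes, so for on-the-fly line processing you still want IPC::Run's pump:

use warnings;
use strict;
use IPC::Run3 qw(run3);

# Placeholder command; substitute the real executable and its arguments.
my @cmd = ('/home/username/perl/myexecutable.sh');

run3 \@cmd, \undef, \my $out, \my $err;   # close stdin, capture stdout/stderr
my $exit_code = $? >> 8;

print "exit=$exit_code\nSTDOUT:\n$out\nSTDERR:\n$err\n";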
OK, I have got it to work, so far. It might still have some issues: I am not sure about environment variables (like umask or language-related ones), about the system load while pump is waiting/blocking, or about how to replace die with capturing of all variables for status. Nevertheless, for my purpose it seems to work well. Will see how it works on a real system.
#! /usr/bin/perl
BEGIN {
    push @INC, '/home/myusername/perl5/lib/perl5'; # Where the modules from CPAN are
}
use IPC::Run qw( start pump finish );

@array_executable_and_parameters = ();
push(@array_executable_and_parameters, "/home/myusername/perl/myexecutable.sh"); # ls -lh ; for i in {1..5}; do echo X; sleep 1; done

my $h = start \@array_executable_and_parameters, \undef, \$out, \$err;
while (42) {
    pump $h; # while ($out or $err);
    if ($out eq '' and $err eq '') { last; }
    print "1A. \$out: $out\n";
    print "1A. \$err: $err\n";
    $out = "";
    $err = "";
}
finish $h or die "Command returned:\n\$?=$?\n\$\@=$@\nKilled by=" . ($? & 0x7F) . "\nExit code=" . ($? >> 8) . "\n";
print "1B. \$out: $out\n";
print "1B. \$err: $err\n";
The key was understanding how the blocking of pump works; all the manuals and help pages kind of skip over that part. So a never-ending while loop, which breaks out once pump passes through without further output, was the key.
I'm trying to capture the output of a tail command to a temp file.
Here is a sample of my Apache access log:
Here is what I have tried so far.
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp ();
use File::Temp qw/ :seekable /;
chomp($tail = `tail access.log`);
my $tmp = File::Temp->new( UNLINK => 0, SUFFIX => '.dat' );
print $tmp "Some data\n";
print "Filename is $tmp\n";
I'm not sure how I can go about passing the output of $tail to this temporary file.
Thanks
I would use a different approach for tailing the file. Have a look at File::Tail; I think it will simplify things.
It sounds like all you need is
print $tmp $tail;
But you also need to declare $tail and you probably shouldn't chomp it, so
my $tail = `tail access.log`;
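Put together, the whole script might look like this sketch (it keeps the question's tail backticks and the UNLINK => 0 temp file):

#!/usr/bin/perl
use strict;
use warnings;
use File::Temp ();

my $tail = `tail access.log`;    # last 10 lines as one string
my $tmp  = File::Temp->new( UNLINK => 0, SUFFIX => '.dat' );
print $tmp $tail;                # write them into the temp file
print "Filename is ", $tmp->filename, "\n";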
A classic Perl approach is to use proper named filehandles and piped opens:
if (open LOGFILE, 'tail /some/log/file |' and open TAIL, '>/tmp/logtail')
{
    print TAIL $_ while <LOGFILE>;
    close LOGFILE and close TAIL;
}
There are many ways to do this, but since you are happy to use modules, you might as well use File::Tail.
use v5.12;
use warnings 'all';
use File::Tail;
my $lines_required = 10;
my $out_file = "output.txt";
open(my $out, '>', $out_file) or die "$out_file: $!\n";
my $tail = File::Tail->new("/some/log/file");
for (1 .. $lines_required) {
    print $out $tail->read;
}
close $out;
This sits and monitors the log file until it gets the 10 new lines. If you just want a copy of the last 10 lines as is, the easiest way is to use I/O redirection from the shell: tail /some/log/file > my_copy.txt
I am very new to Perl. I wrote a script to display user names from the Linux passwd file.
It displays the list of user names, but then it also displays the user ids (which I am not trying to display at the moment), and at the end it prints "List of users ids and names:", which it should have displayed before the list of names.
Any idea why it is behaving like this?
#!/usr/bin/perl
@names = system("cat /etc/passwd | cut -f 1 -d :");
@ids   = system("cat /etc/passwd | cut -f 3 -d :");
$length = @ids;
$i = 0;
print "List of users ids and names:\n";
while ($i < $length) {
    print $names[$i];
    $i += 1;
}
Short answer: system doesn't return the output of a command; it returns the exit value. As the output of cut isn't redirected, it prints to the current STDOUT (e.g. your terminal). Use open or qx// quotes (aka backticks) to capture output:
@names = `cat /etc/passwd | cut -f 1 -d :`;
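Or, with open in list form, which also skips the shell and the useless cat (a sketch):

# Read cut's output line by line; the list form of open avoids a shell.
open my $cut, '-|', 'cut', '-f', '1', '-d', ':', '/etc/passwd'
    or die "Can't run cut: $!";
my @names = <$cut>;
close $cut;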
As you are still learning Perl, here is a write-up detailing how I'd solve that problem:
First, always use strict; use warnings; at the beginning of your script. This helps prevent and detect many problems, which makes it invaluable.
Next, starting a shell when everything could be done inside Perl is inefficient (your solution starts six unnecessary processes: two sets of sh, cat and cut). In fact, cat is useless even in the shell version; just use a shell redirection operator: cut ... </etc/passwd.
To open a file in Perl, we'll do
use autodie; # automatic error handling
open my $passwd, "<", "/etc/passwd";
The "<" is the mode (here: reading). The $passwd variable now holds a file handle from which we can read lines like <$passwd>. The lines still contain a newline, so we'll chomp the variable (remove the line ending):
while (<$passwd>) {   # <> operator reads into $_ by default
    chomp;            # defaults to $_
    ...
}
The split builtin takes a regex that matches separators, a string (defaulting to the $_ variable), and an optional limit. It returns a list of fields. To split a string with the : separator, we'll do
my @fields = split /:/;
The left hand side doesn't have to be an array; we can also supply a list of variables. This matches the list on the right, assigning one element to each variable. If we want to skip a field, we name it undef:
my ($user, undef, $id) = split /:/;
Now we just want to print the user. We can use the print command for that:
print "$user\n";
From Perl 5.10 on, we can use the say feature. This behaves exactly like print, but auto-appends a newline to the output:
say $user;
And voilà, we have our final script:
#!/usr/bin/perl
use strict; use warnings; use autodie; use feature 'say';
open my $passwd, "<", "/etc/passwd";
while (<$passwd>) {
    chomp;
    my ($user, undef, $id) = split /:/;
    say $user;
}
Edit for antique perls
The autodie module was first distributed as a core module with Perl 5.10.1. Also, the say feature isn't available before 5.10.
Therefore, we must use print instead of say and do manual error handling:
#!/usr/bin/perl
use strict; use warnings;
open my $passwd, "<", "/etc/passwd" or die "Can't open /etc/passwd: $!";
while (<$passwd>) {
    chomp;
    my ($user, undef, $id) = split /:/;
    print "$user\n";
}
The open returns a false value when it fails. In that case, the $! variable will hold the reason for the error.
For reading system databases you should use the proper system functions:
use feature qw(say);
while (
    my ( $name,    $passwd, $uid, $gid,   $quota,
         $comment, $gcos,   $dir, $shell, $expire )
    = getpwent
) {
    say "$uid $name";
}
If you're scanning the entire password file, you can use getpwent():
while ( my @pw = getpwent() ) {
    print "@pw\n";
}
See perldoc -f getpwent.
In Unix, what I want to do is history | grep keyword. Because it takes quite a few steps if I want to grep many kinds of keywords, I want to automate it: a Perl script that does everything, so instead of repeating the commands and just changing the keyword, whenever I want to see those certain commands I can just use the script.
The keywords I would like to grep are things like source, ls, cd, etc.
The output can be printed in any format, as long as I know how to do it.
Thanks! I appreciate any comments.
modified (thanks to @chas-owens)
#!/bin/perl
my $searchString = $ARGV[0];
my $historyFile  = ".bash.history";
open FILE, "<", $historyFile or die "could not open $historyFile: $!";
my @lines = <FILE>;
print "Lines that matched $searchString\n";
for (@lines) {
    if ($_ =~ /$searchString/) {
        print "$_\n";
    }
}
original
#!/bin/perl
my $searchString = $ARGV[0];
my $historyFile  = "<.bash.history";
open FILE, $historyFile;
my @lines = <FILE>;
print "Lines that matched $searchString\n";
for (@lines) {
    if ($_ =~ /$searchString/) {
        print "$_\n";
    }
}
To be honest, history | grep whatever is clean, simple and nice ;)
Note: the code may not be perfect.
because it takes quite some steps if i wanna grep many types of keywords
history | grep -E 'ls|cd|source'
-P will switch on the Perl-compatible regular expression library, if you have a new enough version of grep.
This being Perl, there are many ways to do it. The simplest is probably:
#!/usr/bin/perl
use strict;
use warnings;
my $regex = shift;
print grep { /$regex/ } `cat ~/.bash_history`;
This runs the shell command cat ~/.bash_history and returns the output as a list of lines. The list of lines is then consumed by the grep function. The grep function runs the code block for every item and only returns the ones that have a true return value, so it will only return lines that match the regex.
This code has several things wrong with it (it spawns a shell to run cat, it holds the entire file in memory, $regex could contain dangerous things, etc.), but in a safe environment where speed/memory isn't an issue, it isn't all that bad.
A better script would be
#!/usr/bin/perl
use strict;
use warnings;
use constant HISTORYFILE => "$ENV{HOME}/.bash_history";
my $regex = shift;
open my $fh, "<", HISTORYFILE
    or die "could not open ", HISTORYFILE, ": $!";
while (<$fh>) {
    next unless /$regex/;
    print;
}
This script uses a constant to make it easier to change which history file it is using at a later date. It opens the history file directly and reads it line by line, so the whole file is never in memory. This can be very important if the file is very large. It still has the problem that $regex might contain a harmful regex, but as long as you are the person running it, you only have yourself to blame (I wouldn't let outside users pass arguments to a command like this through, say, a web application).
I think you are better off writing a Perl script which does your fancy matching (i.e. replaces the grep) but does not read the history file. I say this because the history does not appear to be flushed to the .bash_history file until I exit the shell. There are probably settings and/or environment variables to control this, but I don't know what they are. So if you just write a Perl script which scans STDIN for your favourite commands, you can invoke it like
history | findcommands.pl
If it's less typing you are after, set up a shell function or alias to do this for you.
As requested by @keifer, here is a sample Perl script which searches your history for a specified set of commands (or a default set). Obviously you should change @dflt_cmds to whichever ones you search for most frequently.
#!/usr/bin/perl
my @dflt_cmds = qw( cd ls echo );
my $cmds = \@ARGV;
if ( !scalar(@$cmds) )
{
    $cmds = \@dflt_cmds;
}
while ( my $line = <STDIN> )
{
    my ( $num, $cmd, @args ) = split( ' ', $line );
    if ( grep( $cmd eq $_, @$cmds ) )
    {
        print join( ' ', $cmd, @args ) . "\n";
    }
}