Running a non-Perl script from my Perl script? [duplicate] - perl

This question already has answers here:
How do I get the output of an external command in Perl?
(7 answers)
Closed 8 years ago.
I am writing a Perl script to automate some software installation.
In my script I run another bash script, capture its output, and print it:
print `/home/me/build.sh`;
but build.sh takes 8 minutes, so my script waits the full 8 minutes and only starts printing the output after build.sh has finished.
How can I print each line from the build.sh program as it is running in bash shell?
Following the comment below, I now use system ("/home/me/build.sh");
but the output goes to the shell, even though in my script I redirect output to my log file:
open $fh, "> filename";
*STDOUT = $fh;
*STDERR = $fh;
I expected that when I use the system function its output would be redirected to filename, but it isn't.
Should I use print system ("/home/me/build.sh"); instead of system ("/home/me/build.sh");?
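One thing worth noting (and the reason the glob assignment above does not behave as expected): *STDOUT = $fh only changes where Perl's own print sends its output; a child process started by system still writes to the original file descriptor 1. To capture the child's output in a file you have to reopen STDOUT itself. A minimal sketch, reusing the filename and script path from above:
#!/usr/bin/perl
use strict;
use warnings;

# Reopen STDOUT itself (file descriptor 1) onto the log file, so that
# child processes started by system() inherit the redirection too.
open STDOUT, '>', 'filename' or die "Cannot open filename: $!";
open STDERR, '>&', \*STDOUT  or die "Cannot dup STDOUT: $!";

system('/home/me/build.sh') == 0
    or warn "build.sh exited with status " . ($? >> 8);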
The full code:
#!/usr/bin/perl
use strict;
use warnings;
use IO::File;

my %DELIVERIES = ();
my $APP_PATH   = $ENV{HOME};
my $LOG_DIR    = "$APP_PATH/logs";
my ($PRG_NAME) = $0 =~ /^[\/.].*\/([a-zA-Z]*.*)/;

main(@ARGV);

sub main
{
    my @comps = components_name();
    my $comp;
    my $pid;
    while ( scalar @comps ) {
        $comp = pop @comps;
        if ( ! ($pid = fork) ) {
            my $filename = lc "$LOG_DIR/$comp.log";
            print "$comp deployment started, see $filename\n";
            open (my $logFile, ">", "$filename") or (die "$PRG_NAME: $!" && exit);
            *STDOUT = $logFile;
            *STDERR = $logFile;
            deploy_component ( $comp );
            exit 0;
        }
    }
    my $res = waitpid (-1, 0);
}

sub components_name
{
    my $FILENAME = "$ENV{HOME}/components";
    my @comps = ();
    my $fh = IO::File->new($FILENAME, "r");
    while (<$fh>)
    {
        push (@comps, $1) if /._(.*?)_.*/;
        chomp ($DELIVERIES{$1} = $_);
    }
    return @comps;
}

sub deploy_component
{
    my $comp_name = shift;
    print "\t[umask]: Changing umask to 007\n";
    `umask 007`;
    print "\t[Deploying]: Start the build.sh command\n\n";
    open (PIPE, "-|", "/build.sh");
    print while (<PIPE>);
}

A more flexible way is to use a pipe.
open PIPE, "/home/me/build.sh |";
open FILE, ">filename";
while (<PIPE>) {
    print $_;       # print to standard output
    print FILE $_;  # print to filename
}
close PIPE;
close FILE;
BTW, print system ("/home/me/build.sh"); will print the return value of system(), which is the exit status of your shell script, not the output you want.
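As a point of comparison, here is a small sketch of the three common ways to run an external command from Perl and what each one gives you back; the build.sh path is taken from the question:
#!/usr/bin/perl
use strict;
use warnings;

# system(): the command shares the terminal; the return value is the exit status.
my $status = system('/home/me/build.sh');
print "exit status: ", $status >> 8, "\n";

# backticks: capture all of the output, but only after the command has finished.
my $all_output = `/home/me/build.sh`;

# piped open: read the output line by line while the command is still running.
open my $pipe, '-|', '/home/me/build.sh' or die "Cannot run build.sh: $!";
while (my $line = <$pipe>) {
    print $line;
}
close $pipe;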

How can I print each line from the build.sh program as it is running in bash shell?
Possible Solution:
You can try the following
system ("sh /home/me/build.sh | tee fileName");
The above statement will show the output of build.sh on the console and, at the same time, write that output to the file whose name is provided as the argument to tee.
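If you also want the build script's error output in the same log (an assumption; the question does not say), merge standard error into standard output before tee sees it:
system ("sh /home/me/build.sh 2>&1 | tee fileName");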

Related

How to re-direct the contents to a file instead of printing it to terminal

How can I avoid printing the contents to the terminal (which takes a long time when there are 20k lines, as in my case) and instead redirect them to a file in Perl?
This is just a sample and not the entire code:
if ($count eq $length)
{
    push(@List, $line);
    print "$line\n"; # Prints the line to the terminal, which is time consuming
}
I tried the following, but it did not work:
if ($cnt eq $redLen)
{
    push(@List, $line);
    print $line > "/home/vibes/text";
}
Please let me know if my question is not clear.
Simply use the three-argument open method:
use strict;
use warnings;
my $line = "Hello Again!";
open (my $fh, ">", "/home/vibes/text") || die "Failed to open /home/vibes/text $!";
print $fh "$line\n";
close($fh); # Always close opened files.
The default filehandle in perl is STDOUT. You can change it with a call to select:
print "Hello\n"; # goes to stdout
open my $fh, '>', '/home/vibes/text';
select($fh);
print "World\n"; # goes to file '/home/vibes/text'
From your shell, output redirection is usually a matter of appending > file to your command. This is true on both Unix-y systems and Windows.
$ perl my_script.pl > /home/vibes/text

Monitor a log file for Mac Address using Perl and Linux::Inotify2 Module

Here is my current script. My goal is to monitor a live log file that is updated every second, and as soon as the script finds the MAC address f8:27:93:88:1c:95, write that line out to a file.
#!/usr/bin/perl
my $mac = "f8:27:93:88:1c:95";
open (OUT, ">output.txt");
sub Reader {
    @a1 = `tail System.log`;
}
sub Parser {
    if ( $_ =~ m/f8:27:93:88:1c:95/ ) {
        print OUT $_;
    }
}
My goal is to be able to watch this log file; it is being updated every second, so tail does not work well.
Here is a snippet from the log file
> [2014-07-18 14:11:22,849] <inform_stat-1> WARN event - [event] User[f8:27:93:0c:da:c5] roams from AP[dc:9f:db:1a:61:bd] to AP[dc:9f:db:1a:61:b9] on "channel 44(na)"
Perhaps use a CPAN module like File::Tail:
#!/usr/bin/perl
use strict;
use warnings;
use autodie;
use File::Tail;
my $infile = 'System.log';
my $outfile = 'output.txt';
my $mac = 'f8:27:93:88:1c:95';
open my $outfh, '>', $outfile;
my $tail = File::Tail->new($infile);
while (defined(my $line = $tail->read)) {
print $outfh $line if $line =~ m/\Q$mac/;
}
You have already mentioned that the log changes every second, so inotify will not help much in your case. I recommend running your Perl script as a daemon so that it can continually analyze the log file and write the results to a text file. To avoid load, use seek and tell so that the whole file does not have to be re-read on every pass. The code below should work for you.
#!/usr/bin/perl
use POSIX qw(setsid);
$| = 1;
# daemonize the program
&daemonize;
while (1)
{
    open (DATA, "</var/log/log");
    open (OUT, ">output.txt");
    my $position = 0;
    $position = `cat /tmp/position` if -e "/tmp/position";
    chomp $position;
    seek (DATA, $position, 0);
    while (<DATA>)
    {
        if ( $_ =~ m/f8:27:93:88:1c:95/ ) {
            print OUT $_;
        }
    }
    $position = tell(DATA);
    open (DATA1, ">/tmp/position");
    print DATA1 $position;
    close(DATA);
    close(DATA1);
    close(OUT);
}
sub daemonize {
    chdir '/' or die "Can't chdir to /: $!";
    open STDIN, '/dev/null' or die "Can't read /dev/null: $!";
    open STDOUT, '>>/dev/null' or die "Can't write to /dev/null: $!";
    open STDERR, '>>/dev/null' or die "Can't write to /dev/null: $!";
    defined(my $pid = fork) or die "Can't fork: $!";
    exit if $pid;
    setsid or die "Can't start a new session: $!";
    umask 0;
}

How to read from a file and direct output to a file if a file name is given on the command line, and print to the console if no argument is given

I made a file, "rootfile", that contains paths to certain files, and the Perl program mymd5.perl gets the md5sum for each file and prints it in a certain order. How do I redirect the output to a file if a name is given on the command line? For instance, if I do
perl mymd5.perl md5file
then it will feed output to md5file. And if I just do
perl mymd5.perl
it will just print to the console.
This is my rootfile:
/usr/local/courses/cs3423/assign8/cmdscan.c
/usr/local/courses/cs3423/assign8/driver.c
/usr/local/courses/cs3423/assign1/xpostitplus-2.3-3.diff.gz
This is my program right now:
open($in, "rootfile") or die "Can't open rootfile: $!";
$flag = 0;
if ($ARGV[0]){
open($out,$ARGV[0]) or die "Can't open $ARGV[0]: $!";
$flag = 1;
}
if ($flag == 1) {
select $out;
}
while ($line = <$in>) {
$md5line = `md5sum $line`;
#md5arr = split(" ",$md5line);
if ($flag == 0) {
printf("%s\t%s\n",$md5arr[1],$md5arr[0]);
}
}
close($out);
If you don't give a FILEHANDLE to print or printf, the output will go to STDOUT (the console).
There are several ways you can redirect the output of your print statements.
select $out;  # everything you print after this line will go to the file specified by the filehandle $out.
...           # your print statements come here.
close $out;   # close the connection when done to avoid confusing the rest of the program.
# or you can use the filehandle right after the print statement, as in:
print $out "Hello World!\n";
You can print to a file whose name is derived from the value in @ARGV as follows.
This will take the name of the file in $ARGV[0] and use it to name a new file, edit.$ARGV[0]:
#!/usr/bin/perl
use warnings;
use strict;
my $file = $ARGV[0];
open my $input, '<', $file or die $!;
my $editedfile = "edit.$file";
open my $name_change, '>', $editedfile or die $!;
if ($file eq "md5file") {
    while (my $line = <$input>) {
        chomp $line;
        # Do something...
        print $name_change "$line\n";
    }
}
Perhaps the following will be helpful:
use strict;
use warnings;
while (<>) {
    my $md5line = `md5sum $_`;
    my @md5arr = split( " ", $md5line );
    printf( "%s\t%s\n", $md5arr[1], $md5arr[0] );
}
Usage: perl mymd5.pl rootfile [>md5file]
The last, optional parameter redirects output to the file md5file; if it is absent, the results are printed to the console.
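If you want the script itself to decide, based on whether a file name was passed as an argument, the same idea can be written with select. A minimal sketch, assuming the paths are still read from rootfile as in the question:
#!/usr/bin/perl
use strict;
use warnings;

open my $in, '<', 'rootfile' or die "Can't open rootfile: $!";

# If a file name was given on the command line, make it the default
# output handle; otherwise output stays on the console (STDOUT).
my $out;
if (@ARGV) {
    open $out, '>', $ARGV[0] or die "Can't open $ARGV[0]: $!";
    select $out;
}

while (my $line = <$in>) {
    chomp $line;
    my $md5line = `md5sum $line`;
    my @md5arr  = split ' ', $md5line;
    printf "%s\t%s\n", $md5arr[1], $md5arr[0];
}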

Do we have an autochomp in Perl?

This is what my Perl code looks like for monitoring a Unix folder :
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec::Functions;
my $date = `date`; chomp $date;
my $datef = `date +%Y%m%d%H%M.%S`; chomp $datef;
my $pwd = `pwd`; chomp $pwd;
my $cache = catfile($pwd, "cache");
my $monitor = catfile($pwd, "monme");
my $subject = '...';
my $msg = "...";
my $sendto = '...';
my $owner = '...';
sub touchandmail {
    `touch $cache -t "$datef"`;
    `echo "$msg" | mail -s "$subject" $owner -c $sendto`;
}
while (1) {
    $date = `date`; chomp $date;
    $datef = `date +%Y%m%d%H%M.%S`; chomp $datef;
    if (! -e "$cache") {
        touchandmail();
    } elsif ("`find $monitor -newer $cache`" ne "") {
        touchandmail();
    }
    sleep 300;
}
To do a chomp after every assignment does not look good. Is there some way to do an "autochomp"?
I am new to Perl and might not have written this code in the best way. Any suggestions for improving the code are welcome.
Don't use the shell, then.
#! /usr/bin/perl
use warnings;
use strict;
use Cwd;
use POSIX qw/ strftime /;
my $date = localtime;
my $datef = strftime "%Y%m%d%H%M.%S", localtime;
my $pwd = getcwd;
The result is slightly different: the output of the date command contains a timezone, but the value of $date above will not. If this is a problem, follow the excellent suggestion by Chas. Owens below and use strftime to get the format you want.
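If you do want the timezone in the formatted string, strftime can usually include it; support for the %Z and %z conversions depends on the platform's strftime, so treat this as a sketch rather than a guarantee:
use POSIX qw/ strftime /;

# %Z (timezone name) and %z (numeric offset) are honoured by most
# platforms' strftime, though the exact behaviour is system-dependent.
my $date_with_tz = strftime "%a %b %e %H:%M:%S %Z %Y", localtime;
print "$date_with_tz\n";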
Your sub
sub touchandmail {
    `touch $cache -t "$datef"`;
    `echo "$msg" | mail -s "$subject" $owner -c $sendto`;
}
will fail silently if something goes wrong. Silent failures are nasty. Better would be code along the lines of
sub touchandmail {
    system("touch", "-t", $datef, $cache) == 0
        or die "$0: touch exited " . ($? >> 8);
    open my $fh, "|-", "mail", "-s", $subject, $owner, "-c", $sendto
        or die "$0: could not start mail: $!";
    print $fh $msg
        or warn "$0: print: $!";
    unless (close $fh) {
        if ($! == 0) {
            die "$0: mail exited " . ($? >> 8);
        }
        else {
            die "$0: close: $!";
        }
    }
}
Using system rather than backticks is more expressive of your intent because backticks are for capturing output. The system(LIST) form bypasses the shell and having to worry about quoting arguments.
Getting the effect of the shell pipeline echo ... | mail ... without the shell means we have to do a bit of the plumbing work ourselves, but the benefit—as with system(LIST)—is not having to worry about shell quoting. The code above uses many-argument open:
For three or more arguments if MODE is '|-', the filename is interpreted as a command to which output is to be piped, and if MODE is '-|', the filename is interpreted as a command that pipes output to us. In the two-argument (and one-argument) form, one should replace dash ('-') with the command. See Using open for IPC in perlipc for more examples of this.
The open above forks a mail process, and $fh is connected to its standard input. The parent process (the code still running touchandmail) performs the role of echo with print $fh $msg. Calling close flushes the handle's I/O buffers plus a little extra because of how we opened it:
If the filehandle came from a piped open, close returns false if one of the other syscalls involved fails or if its program exits with non-zero status. If the only problem was that the program exited non-zero, $! will be set to 0. Closing a pipe also waits for the process executing on the pipe to exit—in case you wish to look at the output of the pipe afterwards—and implicitly puts the exit status value of that command into $? and ${^CHILD_ERROR_NATIVE}.
More generally, the IO::All module does indeed provide the equivalent of an autochomp:
use IO::All;
# for getting command output:
my @date = io("date|")->chomp->slurp;
# $date[0] contains the chomped first line of the output
or more generally:
my $fh = io("file")->chomp->tie;
while (<$fh>) {
# no need to chomp here ! $_ is pre-chomped
}
Granted, for this particular case of date I would agree with the other answerers that you are probably better off using one of the DateTime modules, but if you are simply reading in a file and want all your lines to be chomped, then IO::All with the chomp and tie options applied is very convenient.
Note also that the chomp trick doesn't work when slurping the entire contents of the handle into a scalar directly (that's just the way it is implemented).
Try putting it into a function:
sub autochomp {
my $command = shift;
my $retval = `$command`;
chomp $retval;
return $retval;
}
And then call that for each command you want to execute and chomp, as in the usage sketch below.
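For example, a hypothetical usage of the autochomp() helper defined above, mirroring the original script's variables:
# Each call runs the command and returns its output with the newline removed.
my $date  = autochomp('date');
my $datef = autochomp('date +%Y%m%d%H%M.%S');
my $pwd   = autochomp('pwd');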
Use DateTime or another of the date modules on CPAN instead of the date utility.
For example:
use DateTime;
my $dt = DateTime->now;
print $dt->strftime('%Y%m%d%H%M.%S');
It is possible to assign and chomp in a single line using the following syntax:
chomp ( my $date = `date` );
As for speaking more Perlishly, if you find yourself repeating the same thing over and over again, roll it into a sub:
sub assign_and_chomp {
    my @result;
    foreach my $cmd (@_) {
        chomp ( my $chomped = $cmd );
        push @result, $chomped;
    }
    return @result;
}
my ( $date , $datef , $pwd )
= assign_and_chomp ( `date` , `date +%Y%m%d%H%M.%S` , `pwd` );

Programmatically read from STDIN or input file in Perl

What is the slickest way to programmatically read from STDIN or an input file (if provided) in Perl?
while (<>) {
print;
}
will read either from a file specified on the command line or from STDIN if no file is given, as in the usage examples below.
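Assuming the loop above is saved as program.pl (the file name is only an example), these invocations are equivalent ways of feeding it data:
$ perl program.pl input.txt         # reads from the named file via <>
$ perl program.pl < input.txt       # reads from STDIN via redirection
$ cat input.txt | perl program.pl   # reads from STDIN via a pipe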
If you need this loop construct on the command line, you can use the -n option:
$ perl -ne 'print;'
Here you just put the code that was between the {} in the first example inside the '' in the second.
This provides a named variable to work with:
foreach my $line ( <STDIN> ) {
chomp( $line );
print "$line\n";
}
To read a file, pipe it in like this:
program.pl < inputfile
The "slickest" way in certain situations is to take advantage of the -n switch. It implicitly wraps your code with a while(<>) loop and handles the input flexibly.
In slickestWay.pl:
#!/usr/bin/perl -n
BEGIN {
    # do something once here
}
# implement logic for a single line of input
print $result;
At the command line:
chmod +x slickestWay.pl
Now, depending on your input do one of the following:
Wait for user input
./slickestWay.pl
Read from file(s) named in arguments (no redirection required)
./slickestWay.pl input.txt
./slickestWay.pl input.txt moreInput.txt
Use a pipe
someOtherScript | ./slickestWay.pl
The BEGIN block is necessary if you need to initialize some kind of object-oriented interface, such as Text::CSV or some such, which you can add to the shebang with -M.
-l and -p are also your friends.
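Briefly: -p behaves like -n but prints $_ automatically after each pass, and -l chomps each input line and appends the output record separator to every print. Two illustrative one-liners (the file names are only examples):
$ perl -lne 'print if /ERROR/' app.log    # -n loop; -l strips newlines on input and adds them back on print
$ perl -lpe 's/foo/bar/' input.txt        # -p loop that prints each line after the substitution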
You need to use <> operator:
while (<>) {
print $_; # or simply "print;"
}
Which can be compacted to:
print while (<>);
Arbitrary file:
open my $F, "<file.txt" or die $!;
while (<$F>) {
print $_;
}
close $F;
If there is a reason you can't use the simple solution provided by ennuikiller above, then you will have to use Typeglobs to manipulate file handles. This is way more work. This example copies from the file in $ARGV[0] to that in $ARGV[1]. It defaults to STDIN and STDOUT respectively if files are not specified.
use English;
use IO::Handle;   # provides the ->print method used on the file handles below
my $in;
my $out;
if ($#ARGV >= 0) {
    unless (open($in, "<", $ARGV[0])) {
        die "could not open $ARGV[0] for reading.";
    }
}
else {
    $in = *STDIN;
}
if ($#ARGV >= 1) {
    unless (open($out, ">", $ARGV[1])) {
        die "could not open $ARGV[1] for writing.";
    }
}
else {
    $out = *STDOUT;
}
while ($_ = <$in>) {
    $out->print($_);
}
Do
$userinput = <STDIN>; #read stdin and put it in $userinput
chomp ($userinput); #cut the return / line feed character
if you want to read just one line
Here is how I made a script that could take either command-line input or a text file redirected in.
if ($#ARGV < 1) {
    @ARGV = ();
    @ARGV = <>;
    chomp(@ARGV);
}
This will reassign the contents of the file to @ARGV; from there, you just process @ARGV as if someone had included command-line options.
WARNING
If no file is redirected in, the program will sit there idle because it is waiting for input from STDIN.
I have not yet figured out a way to detect whether a file is being redirected in, which would eliminate the STDIN issue; one common check is sketched below.
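A general-purpose way to detect that (not part of the original answer) is the -t file test, which is true when a handle is attached to a terminal:
# -t STDIN is true when STDIN is a terminal, i.e. nothing was piped or redirected in.
if (-t STDIN) {
    warn "No input detected on STDIN; reading interactively (Ctrl-D to end).\n";
}
else {
    @ARGV = <STDIN>;   # input was piped or redirected in
    chomp(@ARGV);
}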
if (my $file = shift) {    # if a file is specified, read from that
    open(my $fh, '<', $file) or die($!);
    while (my $line = <$fh>) {
        print $line;
    }
}
else {                     # otherwise, read from STDIN
    print while (<>);
}