I want to have the output of a Windows command-line program (say, powercfg -l) written into a file which is created using Perl, and then read the file line by line in a for loop, assigning each line to a string.
You have some good answers already. In addition, if you just want to process a command's output and don't need to send that output directly to a file, you can establish a pipe between the command and your Perl script.
use strict;
use warnings;
open(my $fh, '-|', 'powercfg -l') or die $!;
while (my $line = <$fh>) {
# Do stuff with each $line.
}
system 'powercfg', '-l';
is the recommended way. If you don't mind spawning a subshell,
system "powercfg -l";
will work, too. And if you want the results in a string:
my $str = `powercfg -l`;
my $output = qx(powercfg -l);
## You've got your output loaded into the $output variable.
## Still want to write it to a file?
open my $OUTPUT, '>', 'output.txt' or die "Couldn't open output.txt: $!\n";
print $OUTPUT $output;
close $OUTPUT;
## Now you can loop through each line and
## parse the $line variable to extract the info you are looking for.
foreach my $line (split /[\r\n]+/, $output) {
## Regular expression magic to grab what you want
}
There is no need to first save the output of the command in a file:
my $output = `powercfg -l`;
See qx// in Quote-Like Operators.
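As a side note (my sketch, since the question is about line-by-line processing): in list context qx// returns one element per line of output, which saves the split later:

```perl
# List context: one newline-terminated element per output line.
my @lines = qx(powercfg -l);
chomp @lines;    # strip all the trailing newlines in one call

# $? holds the command's wait status after qx//, just as with system().
die "powercfg failed with status " . ($? >> 8) if $? != 0;
```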
However, if you do want to first save the output in a file, then you can use:
my $output_file = 'output.txt';
system "powercfg -l > $output_file";
open my $fh, '<', $output_file
or die "Cannot open '$output_file' for reading: $!";
while ( my $line = <$fh> ) {
# process lines
}
close $fh;
See perldoc -f system.
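One detail worth adding (my sketch, not part of the original answer): system returns the wait status rather than the command's output, so it pays to check it before reading the file:

```perl
my $output_file = 'output.txt';

# system returns -1 if the command could not be started at all;
# otherwise the low bits flag signals and the high byte is the exit code.
my $status = system "powercfg -l > $output_file";
if ($status == -1) {
    die "failed to run powercfg: $!";
}
elsif ($status != 0) {
    die "powercfg exited with status " . ($status >> 8);
}
```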
Since the OP is running powercfg, they are probably capturing the output of the external script, so they probably won't find this answer terribly useful. This post is written primarily for other people who find the answers here by searching.
This answer describes several ways to start command that will run in the background without blocking further execution of your script.
Take a look at the perlport entry for system. You can use system( 1, 'command line to run'); to spawn a child process and continue your script.
This is really very handy, but there is one serious caveat that is not documented. If you start more than 64 processes in one execution of the script, your attempts to run additional programs will fail.
I have verified this to be the case with Windows XP and ActivePerl 5.6 and 5.8. I have not tested this with Vista or with Strawberry Perl, or any version of 5.10.
Here's a one-liner you can use to test your perl for this problem:
C:\>perl -e "for (1..100) { print qq'\n $_\n-------\n'; system( 1, 'echo worked' ), sleep 1 }"
If the problem exists on your system, and you will be starting many programs, you can use the Win32::Process module to manage your application startup.
Here's an example of using Win32::Process:
use strict;
use warnings;
use Win32::Process;
if( my $pid = start_foo_bar_baz() ) {
print "Started with $pid";
}
sub start_foo_bar_baz {
my $process_object; # Call to Create will populate this value.
my $command = 'C:/foo/bar/baz.exe'; # FULL PATH to executable.
my $command_line = join ' ',
'baz', # Name of executable as would appear on command line
'arg1', # other args
'arg2';
# iflags - controls whether process will inherit parent handles, console, etc.
my $inherit_flags = DETACHED_PROCESS;
# cflags - Process creation flags.
my $creation_flags = NORMAL_PRIORITY_CLASS;
# Path of process working directory
my $working_directory = 'C:/Foo/Bar';
my $ok = Win32::Process::Create(
$process_object,
$command,
$command_line,
$inherit_flags,
$creation_flags,
$working_directory,
);
my $pid;
if ( $ok ) {
$pid = $process_object->GetProcessID;
}
else {
warn "Unable to create process: "
. Win32::FormatMessage( Win32::GetLastError() )
;
return;
}
return $pid;
}
To expand on Sinan's excellent answer and to more explicitly answer your question:
NOTE: backticks `` tell Perl to execute a command and retrieve its output:
#!/usr/bin/perl -w
use strict;
my @output = `powercfg -l`;
chomp(@output); # removes newlines
my $linecounter = 0;
my $combined_line;
foreach my $line (@output) {
print $linecounter++.")";
print $line."\n"; #prints line by line
$combined_line .= $line; # build a single string with all lines
# The line above is the same as writing:
# $combined_line = $combined_line.$line;
}
print "\n";
print "This is all on one line:\n";
print ">".$combined_line."<";
Your output (on my system) would be:
0)
1)Existing Power Schemes (* Active)
2)-----------------------------------
3)Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e (Balanced) *
4)Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c (High performance)
5)Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a (Power saver)
This is all on one line:
>Existing Power Schemes (* Active)-----------------------------------Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e (Balanced) *Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c (High performance)Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a (Power saver)<
Perl makes it easy!
Try using the > operator to redirect the output to a file, like:
powercfg -l > output.txt
And then open output.txt and process it.
Related
Say I have this line in the Perl script, which prints output to STDOUT/console:
printf "Line no. $i";
What code shall I include in the program to direct this output to an output file given by the user at the command line (as shown below)?
Right now, the following portion asks the user for the input file:
print "enter file name";
chomp(my $file=<STDIN>);
open(DATA,$file) or die "error reading";
But I don't want to ask the user for either the input or output file.
What I want is a way for the user to give both the input and output files on the command line while running the program:
perl input_file output_file program.pl
What code shall I include for this?
You can use shift to read the command line arguments to your script. shift reads and removes the first element of an array. If no array is specified (and not inside a subroutine), it will implicitly read from @ARGV, which contains the list of arguments passed to your script. For example:
use strict;
use warnings;
use autodie;
# check that two arguments have been passed
die "usage: $0 input output\n" unless @ARGV == 2;
my $infile = shift;
my $outfile = shift;
# good idea to sanitise the arguments here
open my $in, "<", $infile;
open my $out, ">", $outfile;
while (<$in>) {
print $out $_;
}
close $in;
close $out;
You could call this script like perl script.pl input_file output_file and it would copy the contents of input_file to output_file.
The easiest approach here is to ignore input and output files within your program. Just read from STDIN and write to STDOUT. Let the user redirect those filehandles when calling your program.
Your program looks something like this:
#!/usr/bin/perl
use strict;
use warnings;
while (<STDIN>) {
# do something useful to the data in $_
print;
}
And you call it like this:
$ ./your_program.pl inputfile.txt > outputfile.txt
This is known as the "Unix Filter Model" and it's the most flexible way to write programs that read input and produce output.
You can use the @ARGV variable:
use strict;
use warnings;
if ( @ARGV != 2 )
{
print "Usage : <program.pl> <input> <output>\n";
exit;
}
open my $Input, '<', $ARGV[0] or die "error: $!\n";
open my $Output, '>>', $ARGV[1] or die "error: $!\n";
print $Output $_ while (<$Input>);
close($Input);
close($Output);
Note:
You should run the program in this format: perl program.pl input_file output_file.
I'm learning Perl and wrote a small script to open Perl files and remove the comments.
# Will remove this comment
my $name = ""; # Will not remove this comment
#!/usr/bin/perl -w <- won't remove this special comment
The name of files to be edited are passed as arguments via terminal
die "You need to give at least one file-name as an argument\n" unless (@ARGV);
foreach (@ARGV) {
$^I = "";
(-w && open FILE, $_) || die "Oops: $!";
/^\s*#[^!]/ || print while(<>);
close FILE;
print "Done! Please see file: $_\n";
}
Now when I ran it via Terminal:
perl removeComments file1.pl file2.pl file3.pl
I got the output:
Done! Please see file:
This script is working EXACTLY as I'm expecting but
Issue 1 : Why didn't $_ print the name of the file?
Issue 2 : Since the loop runs 3 times, why was Done! Please see file: printed only once?
How would you write this script in as few lines as possible?
Please comment on my code as well, if you have time.
Thank you.
The while stores the lines read by the diamond operator <> into $_, so you're writing over the variable that stores the file name.
On the other hand, you open the file with open but don't actually use that handle to read; the loop uses the empty diamond operator instead. The empty diamond operator makes an implicit loop over the files in @ARGV, removing file names as it goes, so the foreach runs only once.
To fix the second issue you could use while(<FILE>), or rewrite the loop to take advantage of the implicit loop in <> and write the entire program as:
$^I = "";
/^\s*#[^!]/ || print while(<>);
Here's a more readable approach.
#!/usr/bin/perl
# always!!
use warnings;
use strict;
use autodie;
use File::Copy;
# die with some usage message
die "usage: $0 [ files ]\n" if @ARGV < 1;
for my $filename (@ARGV) {
# create tmp file name that we are going to write to
my $new_filename = "$filename\.new";
# open $filename for reading and $new_filename for writing
open my $fh, "<", $filename;
open my $new_fh, ">", $new_filename;
# Iterate over each line in the original file: $filename,
# if our regex matches, we skip the line. Otherwise we print the line to
# our temporary file.
while(my $line = <$fh>) {
next if $line =~ /^\s*#[^!]/;
print $new_fh $line;
}
close $fh;
close $new_fh;
# use File::Copy's move function to rename our files.
move($filename, "$filename\.bak");
move($new_filename, $filename);
print "Done! Please see file: $filename\n";
}
Sample output:
$ ./test.pl a.pl b.pl
Done! Please see file: a.pl
Done! Please see file: b.pl
$ cat a.pl
#!/usr/bin/perl
print "I don't do much\n"; # comments dont' belong here anyways
exit;
print "errrrrr";
$ cat a.pl.bak
#!/usr/bin/perl
# this doesn't do much
print "I don't do much\n"; # comments dont' belong here anyways
exit;
print "errrrrr";
It's not safe to use multiple loops and rely on getting the right $_. The while loop is clobbering your $_. Try to give your files specific names inside the loop. You can do it like so:
foreach my $filename (@ARGV) {
$^I = "";
(-w && open my $FILE, '<', $filename) || die "Oops: $!";
/^\s*#[^!]/ || print while (<$FILE>);
close $FILE;
print "Done! Please see file: $filename\n";
}
or that way:
foreach (@ARGV) {
my $filename = $_;
$^I = "";
(-w && open my $FILE, '<', $filename) || die "Oops: $!";
/^\s*#[^!]/ || print while (<$FILE>);
close $FILE;
print "Done! Please see file: $filename\n";
}
Please never use barewords for filehandles and do use a 3-argument open.
open my $FILE, '<', $filename — good
open FILE, $filename — bad
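For comparison, a minimal self-contained sketch of the recommended form (the file name here is made up for illustration):

```perl
use strict;
use warnings;

my $filename = 'input.txt';    # hypothetical input file

# Lexical handle + three-argument open: the handle can't clash with
# any other filehandle, and Perl closes it when $fh goes out of scope.
open my $fh, '<', $filename or die "Can't open $filename: $!";
while (my $line = <$fh>) {
    print $line;
}
close $fh;
```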
Simpler solution: Don't use $_.
When Perl was first written, it was conceived as a replacement for Awk and shell, and Perl borrowed heavily from their syntax. For readability, Perl also created the special variable $_, which allows you to use various commands without having to create variables:
while ( <INPUT> ) {
next if /foo/;
print OUTPUT;
}
The problem is that if everything uses $_, then everything will affect $_, with many unpleasant side effects.
Now, Perl is a much more sophisticated language, and has things like lexically scoped variables (hint: you don't use local to create these variables -- that merely gives package variables (a.k.a. global variables) a local value).
Since you're learning Perl, you might as well learn Perl correctly. The problem is that there are too many books that are still based on Perl 3.x. Find a book or web page that incorporates modern practice.
In your program, $_ switches from the file name to the line in the file and back to the next file name. That's what's confusing you. If you used named variables, you could distinguish between files and lines.
I've rewritten your program using more modern syntax, but your same logic:
use strict;
use warnings;
use autodie;
use feature qw(say);
if ( not $ARGV[0] ) {
die "You need to give at least one file name as an argument\n";
}
for my $file ( @ARGV ) {
# Remove suffix and copy file over
if ( $file !~ /\..+?$/ ) {
die qq(File "$file" doesn't have a suffix);
}
( my $output_file = $file ) =~ s/\..+?$/./; # Remove suffix for output
open my $input_fh, "<", $file;
open my $output_fh, ">", $output_file;
while ( my $line = <$input_fh> ) {
print {$output_fh} $line unless $line =~ /^\s*#[^!]/;
}
close $input_fh;
close $output_fh;
}
This is a bit more typing than your version of the program, but it's easier to see what's going on and maintain.
Exact error:
$ ./script.pl file.txt
Can't open file.txt: No such file or directory at ./script.pl line 17.
Use of uninitialized value in chomp at ./script.pl line 17.
Username: Password:
I'm writing a script that takes a filename from the commandline and then writes its output to it:
#!/usr/bin/perl
use warnings;
use strict;
use Term::ReadKey;
my #array;
my $user;
my $pass;
# get login info
print "Username: ";
chomp($user = <>); # line 17
print "Password: ";
ReadMode 2;
chomp($pass = <>);
ReadMode 0;
print " \n";
# ...
# connect to database, and save the info in "@array"
# ...
# save the array to a file
if (defined($ARGV[0])) {
open (MYFILE, ">".$ARGV[0]) or die "Can't open ".$ARGV[0].": $!\n";
foreach (@array) {
print MYFILE $_."\n";
}
close (MYFILE);
# otherwise, print the names to the screen
} else {
foreach (@array) {
print $_."\n";
}
}
However, if I replace $ARGV[0] with "file.txt" or something similar, printing to the file works fine. If I do not provide a filename, the script works fine. My hunch is that the print statement is interfering with the iostream buffer, but I can't figure out how to fix it.
That is how the magic diamond operator works in Perl. If you start the script with an argument, it tries to read input from that file. If you feed it standard input, it reads from there.
Use <STDIN>, not <>, to read from standard input if you are planning to use #ARGV.
Or, even better, read directly from terminal (if STDIN is a terminal). A quick search brought up Term::ReadKey, but I haven't tried it myself.
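A minimal sketch of that fix applied to the snippet above: read the credentials from STDIN explicitly, so the magic <> never consumes the file name waiting in @ARGV (this fragment assumes Term::ReadKey is already loaded, as in the original script):

```perl
print "Username: ";
chomp( my $user = <STDIN> );   # <STDIN> ignores @ARGV entirely

print "Password: ";
ReadMode 2;                    # Term::ReadKey: suppress echo, as before
chomp( my $pass = <STDIN> );
ReadMode 0;
print "\n";
```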
We have 300+ txt files, of which are basically replicates of an email, each txt file has the following format:
To: blabla@hotmail.com
Subject: blabla
From: bla1@hotmail.com
Message: Hello World!
The platform I am to run the script on is Windows, and everything is local (including the Perl instance). The aim is to write a script which crawls through each file (all located within the same directory) and prints out a list of each unique email address in the From field. The concept is very easy.
Can anyone point me in the right direction here? I know how to start off a Perl script, and I am able to read a single file and print all details:
#!/usr/local/bin/perl
open (MYFILE, 'emails/email_id_1.txt');
while (<MYFILE>) {
chomp;
print "$_\n";
}
close (MYFILE);
So now, I need to be able to read and print line 3 of this file, but perform this activity not just once, but for all of the files. I've looked into the File::Find module, could this be of any use?
What platform? If Linux then it's simple:
foreach my $f (@ARGV) {
# Do stuff
}
and then call with:
perl mything.pl *.txt
In Windows you'll need to expand the wildcard first as cmd.exe doesn't expand wildcards (unlike Linux shells):
@ARGV = map glob, @ARGV;
foreach my $f (@ARGV) {
# Do stuff
}
then extracting the third line is just a simple case of reading each line in and counting until you've reached line 3, so you know when to print the result.
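To make that concrete, a short sketch of my own (printing line 3, which holds the From: field in these files):

```perl
@ARGV = map glob, @ARGV;        # expand wildcards under cmd.exe

foreach my $f (@ARGV) {
    open my $fh, '<', $f or die "Can't open $f: $!";
    while (my $line = <$fh>) {
        if ($. == 3) {          # $. is the line number of the last-read handle
            print $line;
            last;
        }
    }
    close $fh;                  # closing the handle also resets $.
}
```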
The glob() builtin can give you a list of files in a directory:
chdir $dir or die $!;
my #files = glob('*');
You can use Tie::File to access the 3rd line of a file:
use Tie::File;
for (@files) {
tie my #lines, 'Tie::File', $_ or die $!;
print $lines[2], "\n";
}
Perl one-liner, windows-version:
perl -wE "#ARGV = glob '*.txt'; while (<>) { say $1 if /^From:\s*(.*)/ }"
It will check all the lines, but only print if it finds a valid From: tag.
Are you using a Unix-style shell? You can do this in the shell without even using Perl.
grep "^From:" ./* | sort | uniq -c
The breakdown is as follows:
grep will grab every line that starts with "From:", and send it to...
sort, which will alpha sort those lines, then...
uniq, which will filter out dupe lines. The "-c" part will count the occurrences.
Your output would look like:
3 From: dave@example.com
5 From: foo@bar.example.com
etc...
Possible issues:
I'm not sure how complex your "From" lines will be, e.g. multiple addresses, different formats, etc.
You could enhance that grep step in a few ways, or replace it with a Perl script that has less-broad functionality than your proposed all-in-one script.
Please comment if anything isn't clear.
Here's my solution (I hope this isn't homework).
It checks all files in the current directory whose names end with ".txt", case-insensitive (e.g., it will find "foo.TXT", which is probably what you want under Windows). It also allows for possible variations in line terminators (at least CR-LF and LF), and searches for the From: prefix case-insensitively, and allows arbitrary whitespace after the :.
#!/usr/bin/perl
use strict;
use warnings;
opendir my $DIR, '.' or die "opendir .: $!\n";
my @files = grep /\.txt$/i, readdir $DIR;
closedir $DIR;
# print "Got ", scalar #files, " files\n";
my %seen = ();
foreach my $file (@files) {
open my $FILE, '<', $file or die "$file: $!\n";
while (<$FILE>) {
if (/^From:\s*(.*)\r?$/i) {
$seen{$1} = 1;
}
}
close $FILE;
}
foreach my $addr (sort keys %seen) {
print "$addr\n";
}
This is what my Perl code looks like for monitoring a Unix folder :
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec::Functions;
my $date = `date`; chomp $date;
my $datef = `date +%Y%m%d%H%M.%S`; chomp $datef;
my $pwd = `pwd`; chomp $pwd;
my $cache = catfile($pwd, "cache");
my $monitor = catfile($pwd, "monme");
my $subject = '...';
my $msg = "...";
my $sendto = '...';
my $owner = '...';
sub touchandmail {
`touch $cache -t "$datef"`;
`echo "$msg" | mail -s "$subject" $owner -c $sendto`;
}
while(1) {
$date = `date`; chomp $date;
$datef = `date +%Y%m%d%H%M.%S`; chomp $datef;
if (! -e "$cache") {
touchandmail();
} elsif ("`find $monitor -newer $cache`" ne "") {
touchandmail();
}
sleep 300;
}
Having to chomp after every assignment does not look good. Is there some way to do an "autochomp"?
I am new to Perl and might not have written this code in the best way. Any suggestions for improving the code are welcome.
Don't use the shell, then.
#! /usr/bin/perl
use warnings;
use strict;
use Cwd;
use POSIX qw/ strftime /;
my $date = localtime;
my $datef = strftime "%Y%m%d%H%M.%S", localtime;
my $pwd = getcwd;
The result is slightly different: the output of the date command contains a timezone, but the value of $date above will not. If this is a problem, follow the excellent suggestion by Chas. Owens below and use strftime to get the format you want.
Your sub
sub touchandmail {
`touch $cache -t "$datef"`;
`echo "$msg" | mail -s "$subject" $owner -c $sendto`;
}
will fail silently if something goes wrong. Silent failures are nasty. Better would be code along the lines of
sub touchandmail {
system("touch", "-t", $datef, $cache) == 0
or die "$0: touch exited " . ($? >> 8);
open my $fh, "|-", "mail", "-s", $subject, $owner, "-c", $sendto
or die "$0: could not start mail: $!";
print $fh $msg
or warn "$0: print: $!";
unless (close $fh) {
if ($! == 0) {
die "$0: mail exited " . ($? >> 8);
}
else {
die "$0: close: $!";
}
}
}
Using system rather than backticks is more expressive of your intent because backticks are for capturing output. The system(LIST) form bypasses the shell and having to worry about quoting arguments.
Getting the effect of the shell pipeline echo ... | mail ... without the shell means we have to do a bit of the plumbing work ourselves, but the benefit—as with system(LIST)—is not having to worry about shell quoting. The code above uses many-argument open:
For three or more arguments if MODE is '|-', the filename is interpreted as a command to which output is to be piped, and if MODE is '-|', the filename is interpreted as a command that pipes output to us. In the two-argument (and one-argument) form, one should replace dash ('-') with the command. See Using open for IPC in perlipc for more examples of this.
The open above forks a mail process, and $fh is connected to its standard input. The parent process (the code still running touchandmail) performs the role of echo with print $fh $msg. Calling close flushes the handle's I/O buffers plus a little extra because of how we opened it:
If the filehandle came from a piped open, close returns false if one of the other syscalls involved fails or if its program exits with non-zero status. If the only problem was that the program exited non-zero, $! will be set to 0. Closing a pipe also waits for the process executing on the pipe to exit—in case you wish to look at the output of the pipe afterwards—and implicitly puts the exit status value of that command into $? and ${^CHILD_ERROR_NATIVE}.
More generally, the IO::All module does indeed provide the equivalent of an autochomp:
use IO::All;
# for getting command output:
my @date = io("date|")->chomp->slurp;
# $date[0] contains the chomped first line of the output
or more generally:
my $fh = io("file")->chomp->tie;
while (<$fh>) {
# no need to chomp here ! $_ is pre-chomped
}
Granted, for this particular case of date I would agree with the other answerers that you are probably better off using one of the DateTime modules, but if you are simply reading in a file and want all your lines to be chomped, then IO::All with the chomp and tie options applied is very convenient.
Note also that the chomp trick doesn't work when slurping the entire contents of the handle into a scalar directly (that's just the way it is implemented).
Try putting it into a function:
sub autochomp {
my $command = shift;
my $retval = `$command`;
chomp $retval;
return $retval;
}
And then call that for each command you want to execute and then chomp.
Use DateTime or other of the date modules on CPAN instead of the date utility.
For example:
use DateTime;
my $dt = DateTime->now;
print $dt->strftime('%Y%m%d%H%M.%S');
It is possible to assign and chomp in a single line using the following syntax:
chomp ( my $date = `date` );
As for speaking more Perlishly, if you find yourself repeating the same thing over and over again, roll it into a sub:
sub assign_and_chomp {
my @result;
foreach my $cmd (@_) {
chomp ( my $chomped = $cmd );
push @result, $chomped;
}
return @result;
}
my ( $date , $datef , $pwd )
= assign_and_chomp ( `date` , `date +%Y%m%d%H%M.%S` , `pwd` );