Searching through a file in Perl - perl

Okay, so I need to read in a file, go through each line, and find where the string ERROR appears. This is what I have so far:
open(LOGFILE, "input.txt") or die "Can't find file";
$title = <LOGFILE>;
$\ = ' ';
while (<>) {
    foreach $title (split) {
        while (/^ERROR/gm) {
            print "ERROR in line $.\n";
        }
    }
}
close LOGFILE;
So the problem I have is that it only looks at the first word of each line. So if the input is
boo far ERROR
it won't register an error. Any help would be greatly appreciated! I'm new to Perl, so please try to keep things basic. Thanks!

This is a more elegant approach, and it fixes the regex issue: the ^ anchored the match to the start of the line.
open(LOGFILE, "input.txt") or die "Can't find file";
while (<LOGFILE>) {
    print "ERROR in line $.\n" if /ERROR/;
}
close LOGFILE;
Or how about from the command line:
perl -n -e 'print "ERROR in line $.\n" if(/ERROR/);' input.txt
-n implicitly loops for all lines of input
-e executes a line of code
To output to a file:
perl -n -e 'print "ERROR in line $.\n" if(/ERROR/);' input.txt > output.txt
While this is a good/simple example of using Perl, if you're using a Unix shell, grep does what you want with no need for scripting (thanks to TLP in the OP comments):
grep -n ERROR input.txt > output.txt
This actually prints the matching line itself, prefixed with its line number.
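To see the two forms side by side, here is a quick shell session; the sample file name and contents are made up for illustration:

```shell
# Illustrative sample file
printf 'boo far\nboo far ERROR\nERROR at start\n' > sample.txt

# The Perl one-liner reports only the line numbers
perl -n -e 'print "ERROR in line $.\n" if /ERROR/;' sample.txt

# grep -n prints the matching lines themselves, prefixed by line number
grep -n ERROR sample.txt
```

Both report lines 2 and 3; grep additionally shows the line content.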

Of course it won't, because ^ in front of your regexp means "start of line". Remove it and it will catch ERROR anywhere. You don't need any splitting tricks either. You want to find ERROR anywhere in the line? Then just write this:
while (<>) {
    if (/ERROR/) {
        print "ERROR in line $.\n";
    }
}

Related

How best (idiomatically) to fail perl script (run with -n/-p) when input file not found?

$ perl -pe 1 foo && echo ok
Can't open foo: No such file or directory.
ok
I'd really like the perl script to fail when the file does not exist. What's the "proper" way to make -p or -n fail when the input file does not exist?
The -p switch is just a shortcut for wrapping your code (the argument following -e) in this loop:
LINE:
while (<>) {
    ...   # your program goes here
} continue {
    print or die "-p destination: $!\n";
}
(-n is the same but without the continue block.)
The <> empty operator is equivalent to readline *ARGV, and that opens each argument in succession as a file to read from. There's no way to influence the error handling of that implicit open, but you can make the warning it emits fatal (note, this will also affect several warnings related to the -i switch):
perl -Mwarnings=FATAL,inplace -pe 1 foo && echo ok
Set a flag in the body of the loop, then check the flag in an END block at the end of the one-liner.
perl -pe '$found = 1; ... ;END {die "No file found" unless $found}' -- file1 file2
Note that it only fails when no file was processed.
To report the problem when not all files have been found, you can use something like
perl -pe 'BEGIN{ $files = @ARGV } $found++ if eof; ... ;END {die "Some files not found" unless $files == $found}'
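A minimal demonstration of the flag-and-END approach; the file names here are made up:

```shell
printf 'x\n' > exists.txt
rm -f missing.txt

# Succeeds: the loop body ran at least once, so $found is set
perl -pe '$found = 1; END { die "No file found\n" unless $found }' exists.txt > /dev/null && echo ok

# Fails: the file never opened, the body never ran, END dies
perl -pe '$found = 1; END { die "No file found\n" unless $found }' missing.txt 2>/dev/null || echo "died as expected"
```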

Tail command used in perl backticks

I'm trying to run a tail command from within a perl script using the usual backticks.
The section in my perl script is as follows:
$nexusTime += nexusUploadTime(`tail $log -n 5`);
So I'm trying to get the last 5 lines of this file but I'm getting the following error when the perl script finishes:
sh: line 1: -n: command not found
Even though when I run the command on the command line it is indeed successful and I can see the 5 lines from that particular file.
Not sure what is going on here. Why does it work from the command line, but through Perl it won't recognize the -n option?
Anybody have any suggestions?
$log has an extraneous trailing newline, so you are executing
tail file.log
-n 5 # Tries to execute a program named "-n"
Fix:
chomp($log);
Note that you will run into problems if $log contains shell metacharacters (such as spaces). Fix:
use String::ShellQuote qw( shell_quote );
my $tail_cmd = shell_quote('tail', '-n', '5', '--', $log);
$nexusTime += nexusUploadTime(`$tail_cmd`);
ikegami pointed out your error, but I would recommend avoiding external commands whenever possible. They aren't portable and debugging them can be a pain, among other things. You can simulate tail with pure Perl code like this:
use strict;
use warnings;
use File::ReadBackwards;
sub tail {
    my ($file, $num_lines) = @_;
    my $bw = File::ReadBackwards->new($file) or die "Can't read '$file': $!";
    my ($lines, $count) = ('', 0);
    while (defined(my $line = $bw->readline) && $num_lines > $count++) {
        $lines .= $line;   # note: lines are accumulated last-first
    }
    $bw->close;
    return $lines;
}
print tail('/usr/share/dict/words', 5);
Output
ZZZ
zZt
Zz
ZZ
zyzzyvas
Note that if you pass a file name containing a newline, this will fail with
Can't read 'foo
': No such file or directory at tail.pl line 10.
instead of the more cryptic
sh: line 1: -n: command not found
that you got from running the tail utility in backticks.
The answer to this question is to place the option -n 5 before the target file: tail -n 5 $log

Perl search for string and get the full line from text file

I want to search for a string and get the full line from a text file through Perl scripting.
So the text file will be like the following.
data-key-1,col-1.1,col-1.2
data-key-2,col-2.1,col-2.2
data-key-3,col-3.1,col-3.2
Here I want to apply data-key-1 as the search string and get the full line into a Perl variable.
Here I want the exact equivalent of grep "data-key-1" data.csv in the shell.
Some syntax like the following worked while running in the console.
perl -wln -e 'print if /\bAPPLE\b/' your_file
But how can I place it in a script? We can't use the perl command itself inside a script. Is there a way to avoid the loops?
If you look at the command-line options you are giving your one-liner, you'll know exactly what to write inside your Perl script. When you read a file, you need a loop. The choice of loop affects performance: reading a file with a for loop is more expensive than with a while loop, because for evaluates the filehandle in list context and slurps every line into memory first.
Your one-liner:
perl -wln -e 'print if /\bAPPLE\b/' your_file
is basically saying:
-w : Use warnings
-l : Chomp the newline character from each line before processing and place it back during printing.
-n : Create an implicit while(<>) { ... } loop to perform an action on each line
-e : Tell perl interpreter to execute the code that follows it.
print if /\bAPPLE\b/ to print entire line if line contains the word APPLE.
So to use the above inside a perl script, you'd do:
#!/usr/bin/perl
use strict;
use warnings;
open my $fh, '<', 'your_file' or die "Cannot open file: $!\n";
while (<$fh>) {
    next unless /\bAPPLE\b/;
    my $line = $_;
    # do something with $line
}
chomp is not really required here because you are not doing anything with the line other than checking for the existence of a word.
open(my $file, '<', 'filename') or die "Cannot open file: $!";
while (<$file>) {
    print $_ if ($_ =~ /^data-key-3,/);
}
use strict;
use warnings;

# the file name of your .csv file
my $file = 'data.csv';

# open the file for reading
open(FILE, "<$file") or
    die("Could not open log file. $!\n");

my @final;

# process line by line:
while (<FILE>) {
    my ($line) = $_;
    # remove any trailing whitespace (the newline)
    # not necessary, but again, good habit
    chomp($line);
    my @result = grep (/data-key-1/, $line);
    push (@final, @result);
}
print @final;

Perl Script to Read File Line By Line and Run Command on Each Line

I found this Perl script here, which seems like it will work for my purposes. It opens a Unicode text file and reads each line so that a command can be run. But I cannot figure out how to run a certain ICU command on each line. Can someone help me out? The error I get is (largefile is the script name):
syntax error at ./largefile line 11, near "/ ."
Search pattern not terminated at ./largefile line 11.
#!/usr/bin/perl
use strict;
use warnings;
my $file = 'test.txt';
open my $info, $file or die "Could not open $file: $!";
while( my $line = <$info>) {
do
LD_LIBRARY_PATH=icu/source/lib/ ./a.out "$line" >> newtext.txt
done
}
close $info;
Basically I want to open a large text file and run the command LD_LIBRARY_PATH=icu/source/lib/ ./a.out "$line" >> newtext.txt on each line (it normally runs from the command line; I think how I call it in the Perl script is the problem, but I don't know how to fix it), so that newtext.txt is populated with all the lines after they have been processed. The ICU part is breaking words for Khmer.
Any help would be much appreciated! I'm not much of a programmer... Thanks!
For executing terminal commands, the command needs to go through system(), so change the line to:
chomp $line;
system("LD_LIBRARY_PATH=icu/source/lib/ ./a.out \"$line\" >> newtext.txt");
The chomp matters: without it, the trailing newline on $line splits the shell command in two.
Have you tried backticks?
while (my $line = <$info>) {
    chomp $line;
    `LD_LIBRARY_PATH=icu/source/lib/ ./a.out "$line" >> newtext.txt`;
}

How can I record changes made during in-place editing in Perl?

I've scripted up a simple ksh script that calls a Perl one-liner to find and replace in files.
The passed-in arg is the home directory:
perl -pi -e 's/find/replace/g' $1/*.html
It works great. However, I'd like to output all the changes to a log file. I've tried piping and redirecting and haven't been able to get it to work. Any ideas?
Thanks,
Glenn
Something like this to send all changes to STDERR:
perl -pi -e '$old = $_; s/find/replace/g and warn "$ARGV:$.: $old$_"; close ARGV if eof' $1/*.html
Updated: Fixed $. on multiple files.
You can print to STDERR and redirect just the STDERR output to a file as below:
perl -pi -e 'chomp($prev=$_);s/find/replace/g and print STDERR "$ARGV - $.: $prev -> $_"; close ARGV if eof' $1/*.html 2> logfile.txt
edit: added the filename, and fixed line number display when multiple input files are used