I have a directory with a list of image header files of the format
image1.hd
image2.hd
image3.hd
image4.hd
I want to search for the regular expression Image type:=4 in the directory and find the file number which has the first occurrence of this pattern. I can do this with a couple of pipes easily in bash:
grep -l 'Image type:=4' image*.hd | sed ' s/.*image\(.*\).hd/\1/' | head -n1
which returns 1 in this case.
This pattern match will be used in a perl script. I know I could use
my $number = `grep -l 'Image type:=4' image*.hd | sed ' s/.*image\(.*\).hd/\1/' | head -n1`
but is it preferable to use pure perl in such cases? Here is the best I could come up with using perl. It is very cumbersome:
my $tmp;
#want to find the planar study in current study
foreach (glob "$DIR/image*.hd"){
    $tmp = $_;
    open FILE, "<", "$_" or die $!;
    while (<FILE>)
    {
        if (/Image type:=4/){
            $tmp =~ s/.*image(\d+).hd/$1/;
        }
    }
    close FILE;
    last;
}
print "$tmp\n";
this also returns the desired output of 1. Is there a more effective way of doing this?
This is simple with the help of a couple of utility modules
use strict;
use warnings;
use File::Slurp 'read_file';
use List::MoreUtils 'firstval';
print firstval { read_file($_) =~ /Image type:=4/ } glob "$DIR/image*.hd";
But if you are restricted to core Perl, then this will do what you want
use strict;
use warnings;
my $firstfile;

while (my $file = glob 'E:\Perl\source\*.pl') {
    open my $fh, '<', $file or die $!;
    local $/;
    if ( <$fh> =~ /Image type:=4/ ) {
        $firstfile = $file;
        last;
    }
}

print $firstfile // 'undef';
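Note that both snippets above print the matching file name, whereas the shell pipeline in the question extracts the number. A minimal follow-up sketch, assuming the imageN.hd naming convention from the question:
# Pull the numeric index out of the matching file name, e.g. 1 from image1.hd
if ( defined $firstfile and $firstfile =~ /image(\d+)\.hd\z/ ) {
    print "$1\n";
}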
I have the results of a program which ran some search and gave me 2000+ txt files. I just need a specific line from each file; this is what I have been trying in Perl:
opendir(DIR, $dirname) or die "Could not open $dirname\n";

while ($filename = readdir(DIR)) {
    print "$filename\n";
    open ($filename, '<', $filename) or die("Could not open file.");
    my $line;
    while( <$filename> ) {
        if( $. == $27 ) {
            print "$line\n";
            last;
        }
    }
}
closedir(DIR);
But there is a problem with the $filename in line 5, and I don't know an alternative to it that would save me from naming each file manually.
Several issues with that code:
Using an old-school bareword identifier for the directory handle instead of an autovivified variable like you are for the file handle.
Using the same variable for the filename and file handle is pretty strange.
You don't check to see if the file is a directory or something else other than a plain file before trying to open it.
$27?
You never assign anything to that $line variable before printing it.
Unless $directory is your program's current working directory, you're running into an issue mentioned in the readdir documentation
If you're planning to filetest the return values out of a readdir, you'd better prepend the directory in question. Otherwise, because we didn't chdir there, it would have been testing the wrong file.
(Substitute open for filetest)
Always use strict; and use warnings;.
Personally, if you just want to print the 27th line of a large number of files, I'd turn to awk and find (using find's -exec ... + form to avoid potential errors from hitting the maximum command line length):
find directory/ -maxdepth 1 -type f -exec awk 'FNR == 27 { print FILENAME; print }' \{\} \+
If you're on a Windows system without standard unix tools like those installed, or it's part of a bigger program, a fixed up perl way:
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;
use feature qw/say/;
use File::Spec;
my $directory = shift;
opendir(my $dh, $directory);
while (my $filename = readdir $dh) {
    my $fullname = File::Spec->catfile($directory, $filename); # Construct a full path to the file
    next unless -f $fullname; # Only look at regular files
    open my $fh, "<", $fullname;
    while (my $line = <$fh>) {
        if ($. == 27) {
            say $fullname;
            print $line;
            last;
        }
    }
    close $fh;
}
closedir $dh;
You might also consider using glob to get the filenames instead of opendir/readdir/closedir.
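For instance, a minimal sketch of that variant (it assumes the same $directory, autodie and say setup as the script above, and that the directory name is glob-friendly):
# Same 27th-line logic, but glob hands back usable paths directly
for my $fullname (glob "$directory/*") {
    next unless -f $fullname;       # only look at regular files
    open my $fh, '<', $fullname;    # autodie reports failures
    while (my $line = <$fh>) {
        if ($. == 27) {
            say $fullname;
            print $line;
            last;
        }
    }
    close $fh;
}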
And if you have Path::Tiny available, a simpler version is:
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;
use feature qw/say/;
use Path::Tiny;
my $directory = shift;
my $dir = path $directory;
for my $file ($dir->children) {
    next unless -f $file;
    my @lines = $file->lines({count => 27});
    if (@lines == 27) {
        say $file;
        print $lines[-1];
    }
}
I have a script which pulls all the .pm files in my directory, looks for a certain pattern, and changes it to the desired value. I tried Tie::File, but it's not looking into the content of the files.
use File::Find;
use Data::Dumper qw(Dumper);
use Tie::File;
my @content;
find( \&wanted, '/home/idiotonperl/project/');
sub wanted {
    push @content, $File::Find::name;
    return;
}
my @content1 = grep{$_ =~ /.*.pm/} @content;
@content = @content1;

for my $absolute_path (@content) {
    my @array='';
    print $absolute_path;
    tie @array, 'Tie::File', $absolute_path or die qq{Not working};
    print Dumper @array;
    foreach my $line(@array) {
        $line=~s/PERL/perl/g;
    }
    untie @array;
}
the output is
Not working at tiereplacer.pl line 22.
/home/idiotonperl/main/content.pm
This is not working as intended (it is not looking into the content of all the .pm files). If I try the same operation on a single test file under my home directory, the content does get replaced:
@content = 'home/idiotonperl/option.pm'
it's working as intended
I would not recommend using tie for that. The simple code below should do what you asked:
use warnings;
use strict;
use File::Copy qw(move);
use File::Glob ':bsd_glob';
my $dir = '/home/...';
my @pm_files = grep { -f } glob "$dir/*.pm";

foreach my $file (@pm_files)
{
    my $outfile = "$file.new";    # but better use File::Temp

    open my $fh,     '<', $file    or die "Can't open $file: $!";
    open my $fh_out, '>', $outfile or die "Can't open $outfile: $!";

    while (my $line = <$fh>)
    {
        $line =~ s/PERL/perl/g;
        print $fh_out $line;      # write out the line, changed or not
    }
    close $fh;
    close $fh_out;

    # Uncomment after testing, to actually overwrite the original file
    #move $outfile, $file or die "Can't move $outfile to $file: $!";
}
The glob from File::Glob lets you specify filenames much as in the shell. See the docs for the accepted metacharacters. The :bsd_glob export handles spaces in filenames better. †
If you need to process files recursively then you indeed want a module. See File::Find::Rule
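For example, a minimal sketch with File::Find::Rule (assuming the module is installed) that could replace the glob line above:
use File::Find::Rule;
# Recursively collect every regular file ending in .pm below $dir
my @pm_files = File::Find::Rule->file->name('*.pm')->in($dir);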
The rest of the code does what we must do when changing file content: copy the file. The loop reads each line, changes the ones that match, and writes each line to another file. If the match fails then s/// makes no changes to $line, so unchanged lines are simply copied through.
In the end we move that file to overwrite the original using File::Copy.
The new file is temporary and I suggest creating it with File::Temp.
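A minimal sketch of that (the template and suffix names here are arbitrary choices). tempfile hands back both an open handle and the file name, so it would stand in for the $outfile assignment and the open of $fh_out above:
use File::Temp qw(tempfile);
# Create the temporary file in the same directory as the originals so that
# the final move is a rename on the same filesystem
my ($fh_out, $outfile) = tempfile('pmXXXXXXXX', DIR => $dir, SUFFIX => '.tmp');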
† The glob pattern "$dir/..." allows for an injection bug for directories with particular names. While this is very unusual it is safer to use the escape sequence
my @pm_files = grep { -f } glob "\Q$dir\E/*.pm";
In this case File::Glob isn't needed since \Q escapes spaces as well.
Solution using my favorite module: Path::Tiny. Unfortunately, it isn't a core module.
use strict;
use warnings;
use Path::Tiny;
my $iter = path('/some/path')->iterator({recurse => 1});

while( my $p = $iter->() ) {
    next unless $p->is_file && $p =~ /\.pm\z/i;
    $p->edit_lines(sub {
        s/PERL/perl/;
        # add more line-editing
    });
    # also check the path(...)->edit(...) as an alternative
}
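As the last comment hints, the path(...)->edit(...) alternative slurps the whole file into $_ for its callback, which is handy if a replacement ever needs to span lines; inside the same loop it would look roughly like this:
# Whole-file variant: the callback sees the complete contents in $_
$p->edit(sub { s/PERL/perl/g });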
Working fine for me:
#!/usr/bin/env perl
use common::sense;
use File::Find;
use Tie::File;
my @content;
find(\&wanted, '/home/mishkin/test/t/');
sub wanted {
    push @content, $File::Find::name;
    return;
}

@content = grep{$_ =~ /.*\.pm$/} @content;

for my $absolute_path (@content) {
    my @array='';
    say $absolute_path;
    tie @array, 'Tie::File', $absolute_path or die "Not working: $!";
    for my $line (@array) {
        $line =~ s/PERL/perl/g;
    }
    untie @array;
}
In the terminal, using Perl, how can I search all PHP files, recursively from the current working directory, for a single-line or multi-line pattern like:
<script>var a=''; * hamoorabi.com * </script>
Read it as: find everything between <script>var a=''; and </script>, but only if it contains hamoorabi.com, and replace it with an empty string (remove it).
As it's JavaScript code, there can be a bunch of unescaped characters inside the search string.
From a unix or cygwin prompt:
$ find . | grep .php | xargs ./xx1.pl
Where the perl script xx1.pl is:
#!/usr/bin/perl
use strict;
use warnings;
undef $/;    # slurp mode: read each file in one go

for (@ARGV) {
    open(FILE, $_);
    my $content = <FILE>;
    close(FILE);

    my $beginning  = '<script>var a=\'\'';
    my $end        = '</script>';
    my $containing = 'hamoorabi.com';

    #$content =~ s/(<script>.*?)(hamoorabi.com)(.*?<\/script>)/$1$3/sg;
    #much better regex provided by ysth
    $content =~ s/\Q$beginning\E(?:(?!\Q$end\E).)*\Q$containing\E.*?\Q$end\E//gs;

    open(FILE, ">$_");
    print FILE $content;
    close(FILE);
}
Use File::Find to walk a directory tree looking for files.
use strict;
use warnings;
use File::Find;
sub wanted {
    # Ignore anything that isn't a file
    return unless -f;
    # Ignore anything without a .php extension
    return unless /\.php$/;

    # Your filename is in $_. Your current directory is the
    # one which contains the current file.

    # Do what you need to do.
    open my $fh, '<', $_ or die $!;
    ...;
}
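The wanted callback then gets handed to find together with one or more starting directories; a minimal sketch of the call (the starting directory '.' here is only an example):
# Walk the tree from the current directory, calling wanted() for every entry
find( \&wanted, '.' );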
I hope this also helps further.
use strict;
use warnings;
String replacement, regex form:
my $str = "<script>var a=''; * hamoorabi.com * </script>";

$str =~ s#<script[^>]*>((?:(?!<\/script>).)*)<\/script># my $script = $&;
    $script =~ s/^(.*)hamoorabi\.com(.*)$/$1$2/g;
    ($script);
#esg;
print $str;
Replace the content in the same file using Tie::File:
my @array;
use Tie::File;
tie @array, 'Tie::File', "Input.php" or die "blabla";
my $len = join "\n", @array;
# apply the removal substitution from above to $len here, then split back
@array = split /\n/, $len;
untie @array;
My program is trying to search for a string in multiple files in a directory. The code finds single-word patterns like perl but fails to find a longer string like Status Code 1.
Can you please let me know how to search for strings with multiple words?
#!/usr/bin/perl
my @list = `find /home/ad -type f -mtime -1`;
# printf("Lsit is $list[1]\n");

foreach (@list) {
    # print("Now is : $_");
    open(FILE, $_);
    $_ = <FILE>;
    close(FILE);

    unless ($_ =~ /perl/) { # works, but fails to find string "Status Code 1"
        print "found\n";
        my $filename = 'report.txt';
        open(my $fh, '>>', $filename) or die "Could not open file '$filename' $!";
        say $fh "My first report generated by perl";
        close $fh;
    } # end unless
} # end For
There are a number of problems with your code
You must always use strict and use warnings at the top of every Perl program. There is little point in declaring anything with my without strict in place
The lines returned by the find command will have a newline at the end which must be removed before Perl can find the files
You should use lexical file handles (my $fh instead of FILE) and the three-parameter form of open as you do with your output file
$_ = <FILE> reads only the first line of the file into $_
unless ($_ =~ /perl/) is inverted logic, and there's no need to specify $_ as it is the default. You should write if ( /perl/ )
You can't use say unless you have use feature 'say' at the top of your program (or use 5.010, which adds all features available in Perl v5.10)
It is also best to avoid using shell commands, as Perl is more than able to do anything that you can do with command-line utilities. In this case -f $file is a test that returns true if the file is a plain file, and -M $file returns the (floating point) number of days since the file's modification time
This is how I would write your program
#!/usr/bin/perl
use strict;
use warnings;
use 5.010;
for my $file ( glob '/home/ad/*' ) {

    next unless -f $file and int(-M $file) == 1;

    open my $fh, '<', $file or die $!;

    while ( <$fh> ) {
        if ( /perl/ ) {
            print "found\n";
            my $filename = 'report.txt';
            open my $out_fh, '>>', $filename or die "Could not open file '$filename': $!";
            say $out_fh "My first report generated by perl";
            close $out_fh;
            last;
        }
    }
}
It should have matched, unless $_ contains the text in a different case. Try this:
unless($_ =~ /Status\s+Code\s+1/i) {
Change
unless ($_ =~ /perl/) {
to:
unless ($_ =~ /(Status Code 1)/) {
I am certain the above works, except it's case sensitive.
Since you question it, I rewrote your script to make more sense of what you're trying to accomplish and to implement the above suggestion. Correct me if I am wrong, but you're trying to write a script that matches "Status Code 1" in a bunch of files last modified within 1 day, and prints each matching filename to a text file.
Anyway, below is what I recommend:
#!/usr/bin/perl
use strict;
use warnings;
my $output_file = 'report.txt';
my @list = `find /home/ad -type f -mtime -1`;

foreach my $filename (@list) {
    print "PROCESSING: $filename";

    open (INCOMING, "<$filename") || die "FATAL: Could not open '$filename' $!";
    foreach my $line (<INCOMING>) {
        if ($line =~ /(Status Code 1)/) {
            open( FILE, ">>$output_file") or die "FATAL: Could not open '$output_file' $!";
            print FILE sprintf ("%s\n", $filename);
            close(FILE) || die "FATAL: Could not CLOSE '$output_file' $!";

            # Bail when we get the first match
            last;
        }
    }
    close(INCOMING) || die "FATAL: Could not close '$filename' $!";
}
I'm trying to select only the .log files in my directory and then search in those files for the word "unbound" and print the entire line into a new output file with the same name as the log file (number###.log) but with a .txt extension. This is what I have so far:
#!/usr/bin/perl
use strict;
use warnings;
my $path = $ARGV[0];
my $outpath = $ARGV[1];
my @files;
my $files;
opendir(DIR,$path) or die "$!";
@files = grep { /\.log$/} readdir(DIR);
my @out;
my $out;
opendir(OUT,$outpath) or die "$!";
my $line;

foreach $files (@files) {
    open (FILE, "$files");
    my @line = <FILE>;
    my $regex = Unbound;
    open (OUT, ">>$out");
    print grep {$line =~ /$regex/ } <>;
}
close OUT;
close FILE;
closedir(DIR);
closedir (OUT);
I'm a beginner, and I don't really know how to create a new text file with the acquired output.
A few things I'd suggest to improve this code:
declare your loop iterators within the loop. foreach my $file ( #files ) {
use 3 arg open: open ( my $input_fh, "<", $filename );
use glob rather than opendir then grep. foreach my $file ( <$path/*.txt> ) {
grep is good for extracting things into arrays. Your grep reads the whole file to print it, which isn't necessary. Doesn't matter much if the file is short though.
perltidy is great for reformatting code.
you're opening 'OUT' on a directory path (I think?) which isn't going to work: $outpath is a directory, not a file. You need to do something different to write output to different files; opendir isn't really valid for output.
because you're using opendir, you're actually getting bare filenames, not full paths, so you might be in the wrong place to open the files. Prepending the path name or doing a chdir are possible solutions. That's one of the reasons I like glob: it returns the path as well.
So with that in mind - how about:
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename;
# Extract paths
my $input_path  = $ARGV[0];
my $output_path = $ARGV[1];

# Error if paths are invalid.
unless ( defined $input_path
    and -d $input_path
    and defined $output_path
    and -d $output_path )
{
    die "Usage: $0 <input_path> <output_path>\n";
}

foreach my $filename (<$input_path/*.log>) {

    # extract the 'name' bit of the filename.
    # be slightly careful with this - it's based
    # on an assumption which isn't always true.
    # File::Spec is a more powerful way of accomplishing this,
    # but should grab 'number####' from /path/to/file/number####.log
    my $output_file = basename( $filename, '.log' );

    # open input and output filehandles.
    open( my $input_fh,  "<", $filename ) or die $!;
    open( my $output_fh, ">", "$output_path/$output_file.txt" ) or die $!;
    print "Processing $filename -> $output_path/$output_file.txt\n";

    # iterate input, extracting into $line
    while ( my $line = <$input_fh> ) {

        # check if $line matches your RE.
        if ( $line =~ m/Unbound/ ) {

            # write it to output.
            print {$output_fh} $line;
        }
    }

    # tidy up our filehandles. Although technically, they'll
    # close automatically because they leave scope
    close($output_fh);
    close($input_fh);
}
Here is a script that takes advantage of Path::Tiny. Now, at this stage of your learning process, you are probably better off understanding @Sobrique's solution, but using modules such as Path::Tiny or Path::Class will make it easier to write these one-off scripts more quickly, and correctly.
Also, I didn't really test this script, so watch out for bugs.
#!/usr/bin/env perl
use strict;
use warnings;
use Path::Tiny;
run(\@ARGV);

sub run {
    my $argv = shift;
    unless (@$argv == 2) {
        die "Need source and destination paths\n";
    }

    my $it = path($argv->[0])->realpath->iterator({
        recurse => 0,
        follow_symlinks => 0,
    });

    my $outdir = path($argv->[1])->realpath;

    while (my $path = $it->()) {
        next unless -f $path;
        next unless $path =~ /[.]log\z/;

        my $logfh = $path->openr;
        my $outfile = $outdir->child($path->basename('.log') . '.txt');
        my $outfh;

        while (my $line = <$logfh>) {
            next unless $line =~ /Unbound/;
            unless ($outfh) {
                $outfh = $outfile->openw;
            }
            print $outfh $line;
        }

        # Only close the output handle if we actually opened one
        if ($outfh) {
            close $outfh
                or die "Cannot close output '$outfile': $!";
        }
    }
}
Notes
realpath will croak if the path provided does not exist.
Similarly for openr and openw.
I am reading input files line-by-line to keep the memory footprint of the program independent of the sizes of input files.
I do not open the output file until I know I have a match to print to.
When matching a file extension using a regular expression pattern, keep in mind that \n is a valid character in Unix file names, and the $ anchor will match just before a trailing newline; that is why the code above uses \z instead.
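A quick illustration of that last point (plain Perl regex behaviour, nothing specific to this script):
my $name = "evil.log\n";                           # a file name with a trailing newline
print "matched with \$\n"  if $name =~ /\.log$/;   # prints: $ matches before the final newline
print "matched with \\z\n" if $name =~ /\.log\z/;  # prints nothing: \z requires the true end of string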