I have some attributes with values stored in an array as below, and now I need to perform some checks on the attribute values. Please suggest how I can proceed in Perl.
@arr1 = `cat passwd.txt | tr ' ' '\n' | egrep -i "maxage|minage" | sort`;
The array @arr1 contains info such as "maxage=0 minage=0".
From this I need to run an if condition on the value of maxage. Is there any way to do it like below? Please suggest, as I am new to Perl.
if ( @arr1[0] | awk -F= '{print $2}' == 0 )
{
printf "Then print task done";
}
You can do the whole process in Perl. For example:
use feature qw(say);
use strict;
use warnings;
my $fn = 'passwd.txt';
open ( my $fh, '<', $fn ) or die "Could not open file '$fn': $!";
my @arr1 = sort grep /maxage|minage/i, split ' ', <$fh>;
close $fh;
if ( (split /=/, shift @arr1)[1] == 0) {
say "done";
}
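If you also want to look at minage, it may be cleaner to load the key=value pairs into a hash instead of relying on array order; a minimal sketch, assuming the same passwd.txt layout as above:
use feature qw(say);
use strict;
use warnings;

my $fn = 'passwd.txt';
open( my $fh, '<', $fn ) or die "Could not open file '$fn': $!";

# collect pairs such as maxage=0 and minage=0 into a hash keyed by attribute name
my %attr;
while ( my $line = <$fh> ) {
    for my $pair ( split ' ', $line ) {
        my ( $key, $value ) = split /=/, $pair, 2;
        $attr{ lc $key } = $value if defined $value;
    }
}
close $fh;

say "task done" if defined $attr{maxage} && $attr{maxage} == 0;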
Can you try this?
@arr1 = `cat passwd.txt | tr ' ' '\n' | egrep -i "maxage|minage" | sort`;
chomp @arr1;   # strip the trailing newlines left by the backticks
$x = `echo $arr1[0] | awk -F= '{print \$2}'`;  # maybe $3 is the true index
if ( $x == 0 )
{
print "Then print task done";
}
I have a file with several blocks of text separated by blank lines. Example:
block1
block1
block2
block3
block3
I need a solution with sed, awk or Perl to locate the first blank line and redirect the previous block to another file and so on until the end of the file.
I have this command in sed that locates the first block, but not the rest:
sed -e '/./!Q'
Can someone help me?
give this line a try:
awk -v RS="" '{print > "file"++c".txt"}' input
it will generate file1...n.txt
Here's an awk:
$ awk 'BEGIN{file="file"++cont}/^$/{file="file"++cont;next}{print>file}' infile
Results
$ cat file1
block1
block1
$ cat file2
block2
$ cat file3
block3
block3
Taking into account several empty lines between blocks:
awk '/./{if(!L)++C;print>"Out"C".txt"}{L=$0!~/^$/}' YourFile
sed will not allow writing to several different external files (an unspecified number of them, in fact) as output.
Here's the solution in Perl
open( my $fh, '<', '/tmp/a.txt' ) or die $!;
{
## record delimiter
local $/ = "\n\n";
my $count = 1;
while ( my $block = <$fh> ) {
chomp $block;
open( my $ofh, '>', sprintf( '/tmp/file%d', $count++ ) ) or die $!;
print {$ofh} $block;
close($ofh);
}
}
close($fh);
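If blocks can be separated by more than one blank line, the same idea works with $/ set to the empty string, which is Perl's paragraph mode and treats any run of blank lines as one separator; a minimal variation of the code above (same assumed input path):
open( my $fh, '<', '/tmp/a.txt' ) or die $!;
{
    ## paragraph mode: any run of blank lines ends a record
    local $/ = "";
    my $count = 1;
    while ( my $block = <$fh> ) {
        chomp $block;    # drop the trailing blank line(s) of the record
        open( my $ofh, '>', sprintf( '/tmp/file%d', $count++ ) ) or die $!;
        print {$ofh} $block, "\n";
        close($ofh);
    }
}
close($fh);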
Here's my solution in Perl:
#!/usr/bin/perl
use strict;
use warnings;
my $n = 0;
my $block = '';
while (<DATA>) { # line gets stored in $_
if (/^\s*$/) { # blank line
write_to_file( 'file' . ++$n, $block );
$block = '';
} else {
$block .= $_;
}
}
# Write any remaining lines
write_to_file( 'file' . ++$n, $block );
sub write_to_file {
my $file = shift;
my $data = shift;
open my $fh, '>', $file or die $!;
print $fh $data;
close $fh;
}
__DATA__
block1
block1
block2
block3
block3
Output:
$ grep . file*
file1:block1
file1:block1
file2:block2
file3:block3
file3:block3
Another way to do it in Perl:
#!/usr/bin/perl
use strict;
use warnings;
# store all lines in $data
my $data = do { local $/; <DATA> };
my @blocks = split /\n\n/, $data;
my $n = 0;
write_to_file( 'file' . ++$n, $_ ) for @blocks;
sub write_to_file {
my $file = shift;
my $data = shift;
open my $fh, '>', $file or die $!;
print $fh $data;
close $fh;
}
__DATA__
block1
block1
block2
block3
block3
This might work for you (GNU csplit & sed):
csplit -qf uniqueFileName file '/^$/' '{*}' && sed -i '/^$/d' uniqueFileName*
or if you want to go with the defaults:
csplit -q file '/^$/' '{*}' && sed -i '/^$/d' xx*
Use:
tail -n+1 xx* # to check the results
I'm looking for a way to match two terms in a single string. For instance if I need to match both "foo" and "bar" in order for the string to match and be printed, and the string is "foo 121242Z AUTO 123456KT 8SM M10/M09 SLP02369", it would not match. But if the string was "foo 121242Z AUTO 123456KT 8SM bar M10/M09 SLP02369", it would match and then go on to be printed. Here's the code that I have currently but I am a bit stuck. Thanks!
use strict;
use warnings;
use File::Find;
use Cwd;
my @folder = ("/d2/aschwa/archive_project/METAR_data/");
open(OUT , '>', 'TEKGEZ_METARS.txt') or die "Could not open $!";
print OUT "Date (YYYYMMDD), Station, Day/Time, Obs Type, Wind/Gust (Kt), Vis (SM),
Sky, T/Td (C), Alt, Rmk\n";
print STDOUT "Finding METAR files\n";
my $criteria = sub {if(-e && /^/) {
open(my $file,$_) or die "Could not open $_ $!\n";
my $dir = getcwd;
my #dirs = split ('/', $dir);
while(<$file>) {
$_ =~ tr/\015//d;
print OUT $dirs[-1], ' ', $_ if /foo?.*bar/;
}
}
};
find($criteria, @folder);
close OUT;
print STDOUT "Done Finding Station METARS\n";
Why not just something simple:
perl -ne'print if /foo.*bar/'
If you want to process more files from some directory, use find:
find /d2/aschwa/archive_project/METAR_data/ -type f -exec perl -MFile::Spec -ne'BEGIN{$dir = (File::Spec->splitdir($ARGV[0]))[-2]} print $dir, " ", $_ if /foo.*bar/' {} \; > TEKGEZ_METARS.txt
You can achieve it with positive look-ahead for both strings:
print OUT $dirs[-1], ' ', $_ if m/(?=.*foo)(?=.*bar)/;
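To sanity-check the look-ahead against the two example strings from the question, here is a small standalone sketch:
use strict;
use warnings;

my @strings = (
    "foo 121242Z AUTO 123456KT 8SM M10/M09 SLP02369",
    "foo 121242Z AUTO 123456KT 8SM bar M10/M09 SLP02369",
);

for my $s (@strings) {
    # matches only when both "foo" and "bar" appear somewhere in the string
    print "$s\n" if $s =~ /(?=.*foo)(?=.*bar)/;
}
Only the second string is printed.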
#!/usr/bin/perl
use warnings;
use strict;
my $string1 = "foo 121242Z AUTO 123456KT 8SM M10/M09 SLP02369";
my $string2 = "foo 121242Z AUTO 123456KT 8SM bar M10/M09 SLP02369";
my @array = split(/\s+/, $string2);
my $count = 0;
foreach (@array){
$count++ if /foo/;
$count++ if /bar/;
}
print join(" ", #array), "\n" if $count == 2;
This will print for $string2, but not for $string1
I want to get the unique elements (lines) from a file, which I will then send by email.
I have tried two methods but neither is working:
1st way:
my #array = "/tmp/myfile.$device";
my %seen = ();
my $file = grep { ! $seen{ $_ }++ } @array;
2nd way :
my $filename = "/tmp/myfile.$device";
cat $filename |sort | uniq > $file
How can I do it?
You seem to have forgotten to read the file!
open(my $fh, '<', $file_name)
or die("Can't open \"$file_name\": $!\n");
my %seen;
my @unique = grep !$seen{$_}++, <$fh>;
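If the deduplicated lines then need to go into a file for the mail step, the rest is just a write; a small sketch (the output path is an assumption):
open( my $out, '>', '/tmp/myfile.unique' )
    or die("Can't open output file: $!\n");
print {$out} @unique;    # the lines read from <$fh> keep their newlines, so no join is needed
close($out);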
You need to open the file and read it.
"cat" is a shell command not perl
Try something like this
my $F;
die $! if(!open($F,"/tmp/myfile.$device"));
my #array = <$F>;
my %seen = ();
my @unique = grep { ! $seen{ $_ }++ } @array;
The die $! will stop the program with an error if the file doesn't open correctly.
@array = <$F> reads all the data from the filehandle $F opened above into the array.
If you rig the argument list, you can make Perl open the file automatically, using:
perl -n -e 'BEGIN{@ARGV=("/tmp/myfile.device");} print if $count{$_}++ == 0;'
I'm parsing the source code of many websites, an entire huge web with thousands of pages. Now I want to search for stuff in Perl; I want to find the number of occurrences of a keyword.
For parsing the web pages I use curl and pipe the output to "grep -c", which doesn't work, so I want to use Perl. Can Perl be utilised completely to crawl a page?
E.g.
cat RawJSpiderOutput.txt | grep parsed | awk -F " " '{print $2}' | xargs -I replaceStr curl replaceStr?myPara=en | perl -lne '$c++while/myKeywordToSearchFor/g;END{print$c}'
Explanation: In the text file above I have usable and unusable URLs. With "grep parsed" I fetch the usable URLs. With awk I select the 2nd column, which contains the pure usable URL. So far so good. Now to the question: with curl I fetch the source (appending some parameter, too) and pipe the whole source code of each page to Perl in order to count "myKeywordToSearchFor" occurrences. I would love to do this in Perl only if it is possible.
Thanks!
This uses Perl only (untested):
use strict;
use warnings;
use File::Fetch;
my $count;
open my $SPIDER, '<', 'RawJSpiderOutput.txt' or die $!;
while (<$SPIDER>) {
chomp;
if (/parsed/) {
my $url = (split)[1];
$url .= '?myPara=en';
my $ff = File::Fetch->new(uri => $url);
$ff->fetch or die $ff->error;
my $fetched = $ff->output_file;
open my $FETCHED, '<', $fetched or die $!;
while (<$FETCHED>) {
$count++ if /myKeyword/;
}
unlink $fetched;
}
}
print "$count\n";
Try something more like,
perl -e 'while(<>){my @words = split " ";for my $word(@words){if($word=~/myKeyword/){++$c}}} print "$c\n"'
i.e.
while (<>) # as long as we're getting input (into “$_”)
{ my @words = split ' '; # split $_ (implicit) on whitespace, so we examine each word
for my $word (@words) # (and don't miss two keywords on one line)
{ if ($word =~ /myKeyword/) # whenever it's found,
{ ++$c } } } # increment the counter (auto-vivified)
print "$c\n" # and after end of file is reached, print the counter
or, spelled out strict-like
use strict;
my $count = 0;
while (my $line = <STDIN>) # except that <> is actually more magical than this
{ my @words = split ' ' => $line;
for my $word (@words)
{ ++$count if $word =~ /myKeyword/; } }
print "$count\n";
I have a simple .csv file that I want to extract data out of and write to a new file.
I want to write a script that reads in a file, reads each line, then splits and structures the columns in a different order, and if a line in the .csv contains 'xxx', does not output that line to the output file.
I have already managed to read in a file and create a secondary file; however, I am new to Perl and am still trying to work out the commands. The following is a test script I wrote to get to grips with Perl, and I was wondering if I could alter this to do what I need:
open (FILE, "c1.csv") || die "couldn't open the file!";
open (F1, ">c2.csv") || die "couldn't open the file!";
#print "start\n";
sub trim($);
sub trim($)
{
my $string = shift;
$string =~ s/^\s+//;
$string =~ s/\s+$//;
return $string;
}
$a = 0;
$b = 0;
while ($line=<FILE>)
{
chop($line);
if ($line =~ /xxx/)
{
$addr = $line;
$post = substr($line, length($line)-18,8);
}
$a = $a + 1;
}
print $b;
print " end\n";
Any help is much appreciated.
To manipulate CSV files it is better to use one of the available modules at CPAN. I like Text::CSV:
use Text::CSV;
my $csv = Text::CSV->new ({ binary => 1, empty_is_undef => 1 }) or die "Cannot use CSV: ".Text::CSV->error_diag ();
open my $fh, "<", 'c1.csv' or die "ERROR: $!";
$csv->column_names('field1', 'field2');
while ( my $l = $csv->getline_hr($fh)) {
next if ($l->{'field1'} =~ /xxx/);
printf "Field1: %s Field2: %s\n", $l->{'field1'}, $l->{'field2'}
}
close $fh;
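To also produce the reordered output file the question asks for (c2.csv; the column order below is only an example, since the desired order isn't specified), Text::CSV can write rows back out as well; a minimal sketch:
use strict;
use warnings;
use Text::CSV;

my $csv = Text::CSV->new({ binary => 1, eol => "\n" })
    or die "Cannot use CSV: " . Text::CSV->error_diag();

open my $in,  '<', 'c1.csv' or die "ERROR: $!";
open my $out, '>', 'c2.csv' or die "ERROR: $!";

while ( my $row = $csv->getline($in) ) {
    next if grep { /xxx/ } @$row;              # skip lines containing 'xxx'
    $csv->print( $out, [ @{$row}[2, 0, 1] ] ); # example reordering: 3rd, 1st, 2nd column
}
close $in;
close $out;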
If you need to do this only once and don't need the program later, you can do it with a one-liner:
perl -F, -lane 'next if /xxx/; @n=map { s/(^\s*|\s*$)//g;$_ } @F; print join(",", (map{$n[$_]} qw(2 0 1)));'
Breakdown:
perl -F, -lane
^^^ ^ <- split lines at ',' and store fields into array @F
next if /xxx/; #skip lines that contain xxx
@n=map { s/(^\s*|\s*$)//g;$_ } @F;
#trim spaces from the beginning and end of each field
#and store the result into new array @n
print join(",", (map{$n[$_]} qw(2 0 1)));
#recombine array @n into new order - here 2 0 1
#join them with comma
#print
Of course, for repeated use or in a bigger project you should use some CPAN module. And the above one-liner has many caveats, too.