awk output to variable and change directory - perl

In the script below, I am not able to change the directory. For every filesystem above 70% disk usage, I need to find out which directories inside it are consuming the most space.
#!/usr/bin/perl
use strict;
use warnings;
my $test=qx("df -h |awk \+\$5>=70 {print \$6} ");
chdir($test) or die "$!";
print $test;
system("du -sh * | grep 'G'");

No need to call awk in your case because Perl is quite good at splitting and printing certain lines itself. Your code has some issues:
The code qx("df -h |awk \+\$5>=70 {print \$6} ") tries to execute the string "df -h | awk ..." as a single command, which fails because there is no command called "df -h | awk". When I run that code I get sh: 1: df -h |awk +>=70 {print } : not found. You can fix that by dropping the quotes ", because qx() already does the quoting. The variable $test is empty afterwards, so the chdir changes to your $HOME directory.
Then you'll see the next error: awk: line 1: syntax error at or near end of line, because it calls awk +\$5>=70 {print \$6}. Correct would be awk '+\$5>=70 {print \$6}', i.e. with ticks ' around the awk scriptlet.
As stated in a comment, df -h splits long lines into two lines. Example:
Filesystem 1K-blocks Used Available Use% Mounted on
/long/and/possibly/remote/file/system
10735331328 10597534720 137796608 99% /local/directory
Use df -hP to get a guaranteed column order and one-line output per filesystem.
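Putting those fixes together, a minimal corrected version of the original call might look like this (a sketch; it still assumes only one filesystem is above 70%, and the script below avoids the chdir entirely):
my $test = qx(df -hP | awk '+\$5>=70 {print \$6}');
chomp $test;                              # qx() output ends with a newline
chdir($test) or die "chdir '$test': $!";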
The last system call shows the directory usage (space) for all lines containing the letter G. I reckon that's not exactly what you want.
I suggest the following Perl script:
#!/usr/bin/env perl
use strict;
use warnings;
foreach my $line ( qx(df -hP) ) {
    my ($fs, $size, $used, $avail, $use, $target) = split(/\s+/, $line);
    next unless ($use =~ /^\d+\s*\%$/);   # skip header line
    # now $use is e.g. '90%' and we drop the '%' sign:
    $use =~ s/\%$//;
    if ($use > 70) {
        print "almost full: $target; top 5 directories:\n";
        # no need to chdir here. Simply use $target/* as search pattern,
        # reverse-sort by "human readable" numbers, and show the top 5:
        system("du -hs $target/* 2>/dev/null | sort -hr | head -5");
        print "\n\n";
    }
}

#!/usr/bin/perl
use strict;
use warnings;
my @bigd = map  { my @f = split " "; $f[5] }
           grep { my @f = split " "; $f[4] =~ /^(\d+)/ && $1 >= 70 }
           split "\n", `df -hP`;
print "big directories: $_\n" for @bigd;
for my $bigd (@bigd) {
    chdir($bigd);
    my @bigsubd = grep { my @f = split " "; $f[0] =~ /G/ }
                  split "\n", `du -sh *`;
    print "big subdirectories in $bigd:\n";
    print "$_\n" for @bigsubd;
}
I believe you wanted to do something like this.

Related

Print list on a single line from a pipe

I have a list of PCs and I need to append quotes and commas to each of them so that I can do a SQL query
List example
Row1|PCName|PC1.local
Row2|PCName|PC2.local
Row3|PCName|PC3.local
and I need to get this
"PC1.local", "PC2.local", "PC3.local", ......
Here is what I tried
cat list.txt | awk -F\| '{print $NF}' | perl -e 'while(<>){ print "\"$_\", ";}'
I get this
", "PC1.local
", "PC2.local
", "PC3.local
", "
How can I make those PCs show up in a single line and with the format that I need?
I know using both awk and Perl might be overkill here, and it could be done with Perl alone or awk alone, but I'm interested in learning how to pipe things to Perl. How can I make Perl print those PC names in the format I need?
How about:
#!/usr/bin/env perl
use strict;
use warnings;
print join ",", map { chomp; '"'.(split /\|/)[2].'"' } <DATA> ;
__DATA__
Row1|PCName|PC1.local
Row2|PCName|PC2.local
Row3|PCName|PC3.local
Output:
"PC1.local","PC2.local","PC3.local"
As a one liner:
perl -e 'print join ",", map { s/\n//; q{"}.(split /\|/)[2].q{"} } <>'
$ awk -F'|' '{printf "%s\"%s\"", (NR>1?", ":""), $3} END{print ""}' file
"PC1.local", "PC2.local", "PC3.local"
With the Unix toolset:
$ cut -d'|' -f3 file | sed 's/.*/"&"/' | paste -s -d,
extract third field, wrap with quotes, join with comma
Here's a Perl one-line solution
$ perl -le 'print join ", ", map { /([^|\s]+)$/ && qq{"$1"} } <>' myfile
output
"PC1.local", "PC2.local", "PC3.local"
#!perl
use strict;
use warnings;
while ( my $line = readline(*STDIN) ) {
    chomp $line;
    my @machines = split /\|/, $line;
    print join(',', map { '"' . $_ . '"' } @machines), "\n";
}
Output:
$ cat list.txt | perl test.pl
"Row1","PCName","PC1.local"
"Row2","PCName","PC2.local"
"Row3","PCName","PC3.local"

running awk command in perl

I have a tab-delimited file (dummy) that looks like this:
a b
a b
a c
a c
a b
I am trying to run an awk command inside the Perl script in which file.txt is being created.
The awk command:
$n=system(" awk -F"\t" '{if($1=="a" && $2=="b") print $1,$2}' file.txt|wc -l ")
Error:
comparison operator :error in '==' , ',' between $1 and $2 in print }'
The awk command runs fine on the command line but gives an error when run inside the script.
I don't see any syntax error in the awk command.
Setting aside the question of what you are trying to achieve by executing awk from within Perl (it could be accomplished in Perl itself), you could use the q operator:
$cmd = q(awk -F"\t" '{if($1=="a" && $2=="b") print $1,$2}' file.txt | wc -l);
$n = system($cmd);
Note that using double-quotes would interpolate variables and you'd need to escape those.
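Also note that system() only returns the command's exit status; if the goal is to capture the line count itself in $n, you would use backticks/qx() instead. A minimal sketch under that assumption:
my $cmd = q(awk -F"\t" '{if($1=="a" && $2=="b") print $1,$2}' file.txt | wc -l);
chomp( my $n = qx($cmd) );   # qx() captures stdout; wc -l may pad it with spaces
print "$n\n";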
You can get the number of a\tb lines from Perl itself without calling an external command:
#!/usr/bin/perl
use warnings;
use strict;
open my $FH, '<', 'file.txt' or die $!;
my $n = 0;
"a\tb\n" eq $_ and $n++ while <$FH>;
print "$n\n";

How to add blank line after every grep result using Perl?

How to add a blank line after every grep result?
For example, grep -o "xyz" may give something like -
file1:xyz
file2:xyz
file2:xyz2
file3:xyz
I want the output to be like this -
file1:xyz

file2:xyz
file2:xyz2

file3:xyz
I would like to do something like
grep "xyz" | perl (code to add a new line after every grep result)
This is the direct answer to your question:
grep 'xyz' | perl -pe 's/$/\n/'
But this is better:
perl -ne 'print "$_\n" if /xyz/'
EDIT
Ok, after your edit, you want (almost) this:
grep 'xyz' * | perl -pe 'print "\n" if /^([^:]+):/ && ! $seen{$1}++'
If you don’t like the blank line at the beginning, make it:
grep 'xyz' * | perl -pe 'print "\n" if /^([^:]+):/ && ! $seen{$1}++ && $. > 1'
NOTE: This won’t work right on filenames with colons in them. :)
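If colons in filenames are a real concern, one workaround (a sketch assuming GNU grep, whose -Z/--null option puts a NUL byte after the filename instead of the colon) is to key on that byte and put the colon back afterwards:
grep -Z 'xyz' * | perl -pe 'print "\n" if /^([^\0]+)\0/ && ! $seen{$1}++ && $. > 1; s/\0/:/'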
If you want to use perl, you could do something like
grep "xyz" | perl -p -e 's/(.*)/\1\n/g'
If you want to use sed (where I seem to have gotten better results), you could do something like
grep "xyz" | sed 's/.*/\0\n/g'
This prints a newline after every single line of grep output:
grep "xyz" | perl -pe 'print "\n"'
This prints a newline in between results from different files. (Answering the question as I read it.)
grep 'xyx' * | perl -pe '/(.*?):/; if ($f ne $1) {print "\n"; $f=$1}'
Use a state machine to determine when to print a blank line:
#!/usr/bin/env perl
use strict;
use warnings;
# state variable to determine when to print a blank line
my $prev_file = '';
# change DATA to the appropriate input file handle
while( my $line = <DATA> ){
    # did the state change?
    if( my ( $file ) = $line =~ m{ \A ([^:]*) \: .*? xyz }msx ){
        # blank lines between states
        print "\n" if $file ne $prev_file && length $prev_file;
        # set the new state
        $prev_file = $file;
    }
    # print every line
    print $line;
}
__DATA__
file1:xyz
file2:xyz
file2:xyz2
file3:xyz

Grep to match all lines of patternfile (perl -e ok too)

I'm looking for a simple/elegant way to grep a file such that every returned line must match every line of a pattern file.
With input file
acb
bc
ca
bac
And pattern file
a
b
c
The command should return
acb
bac
I tried to do this with grep -f, but that returns a line if it matches any single pattern in the file (not all of them). I also tried something with a recursive call to perl -ne (for each line of the pattern file, call perl -ne on the search file and try to grep in place), but I couldn't get the syntax parser to accept a call to perl from within perl, so I'm not sure whether that's possible.
I thought there's probably a more elegant way to do this, so I thought I'd check. Thanks!
===UPDATE===
Thanks for your answers so far, sorry if I wasn't clear but I was hoping for just a one-line result (creating a script for this seems too heavy, just wanted something quick). I've been thinking about it some more and I came up with this so far:
perl -n -e 'chomp($_); print " | grep $_ "' pattern | xargs echo "cat input"
which prints
cat input | grep a | grep b | grep c
This string is what I want to execute, I just need to somehow execute it now. I tried an additional pipe to eval
perl -n -e 'chomp($_); print " | grep $_ "' pattern | xargs echo "cat input" | eval
Though that gives the message:
xargs: echo: terminated by signal 13
I'm not sure what that means?
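For reference, signal 13 is SIGPIPE: eval is a shell builtin that reads nothing from its stdin, so echo gets killed writing into a pipe nobody reads. The command string you built can instead be fed straight to a shell; a sketch using the same input and pattern files:
perl -ne 'BEGIN { print "cat input" } chomp; print " | grep $_"; END { print "\n" }' pattern | sh
This builds and runs cat input | grep a | grep b | grep c and prints the matching lines.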
One way using perl:
Content of input:
acb
bc
ca
bac
Content of pattern:
a
b
c
Content of script.pl:
use warnings;
use strict;
## Check arguments.
die qq[Usage: perl $0 <input-file> <pattern-file>\n] unless @ARGV == 2;
## Open files.
open my $pattern_fh, qq[<], pop @ARGV or die qq[ERROR: Cannot open pattern file: $!\n];
open my $input_fh, qq[<], pop @ARGV or die qq[ERROR: Cannot open input file: $!\n];
## Variable to save the regular expression.
my $str;
## Read patterns to match, and create a regex, with each string in a positive
## look-ahead.
while ( <$pattern_fh> ) {
    chomp;
    $str .= qq[(?=.*$_)];
}
my $regex = qr/$str/;
## Read each line of data and test if the regex matches.
while ( <$input_fh> ) {
    chomp;
    printf qq[%s\n], $_ if m/$regex/o;
}
Run it like:
perl script.pl input pattern
With the following output:
acb
bac
Using Perl, I suggest you read all the patterns into an array and compile them. Then you can read through your input file using grep to make sure all of the regexes match.
The code looks like this
use strict;
use warnings;
open my $ptn, '<', 'pattern.txt' or die $!;
my @patterns = map { chomp(my $re = $_); qr/$re/; } grep /\S/, <$ptn>;
open my $in, '<', 'input.txt' or die $!;
while (my $line = <$in>) {
    print $line unless grep { $line !~ $_ } @patterns;
}
output
acb
bac
Another way is to read all the input lines and then start filtering by each pattern:
#!/usr/bin/perl
use strict;
use warnings;
open my $in, '<', 'input.txt' or die $!;
my @matches = <$in>;
close $in;
open my $ptn, '<', 'pattern.txt' or die $!;
for my $pattern (<$ptn>) {
    chomp($pattern);
    @matches = grep(/$pattern/, @matches);
}
close $ptn;
print @matches;
output
acb
bac
Not grep and not a one-liner...
MFILE=file.txt
PFILE=patterns
i=0
while read line; do
    let i++
    pattern=$(head -$i $PFILE | tail -1)
    if [[ $line =~ $pattern ]]; then
        echo $line
    fi
    # (or use sed instead of bash regex:
    #  echo $line | sed -n "/$pattern/p")
done < $MFILE
A bash (Linux) based solution:
#!/bin/sh
INPUTFILE=input.txt #Your input file
PATTERNFILE=patterns.txt # file with patterns
# replace new line with '|' using awk
PATTERN=`awk 'NR==1{x=$0;next}NF{x=x"|"$0}END{print x}' "$PATTERNFILE"`
PATTERNCOUNT=`wc -l <"$PATTERNFILE"`
# build regex of style :(a|b|c){3,}
PATTERN="($PATTERN){$PATTERNCOUNT,}"
egrep "${PATTERN}" "${INPUTFILE}"
Here's a grep-only solution:
#!/bin/sh
foo ()
{
    FIRST=1
    cat pattern.txt | while read line; do
        if [ $FIRST -eq 1 ]; then
            FIRST=0
            echo -n "grep \"$line\""
        else
            echo -n "$STRING | grep \"$line\""
        fi
    done
}
STRING=`foo`
eval "cat input.txt | $STRING"

What's the best way to convert "awk '{print $2 >> $1}' file" in a Perl one-liner?

How could I convert:
awk '{print $2 >> $1}' file
in a short Perl one-liner?
"file" could look like this:
fruit banana
vegetable beetroot
vegetable carrot
mushroom chanterelle
fruit apple
There may be some other ways, but here's what I can think of:
perl -ane 'open(FILE,">>",$F[0]); print FILE "$F[1]\n"; close(FILE);' file
I guess awk has to be better at some things :-)
This is right at the limit of what I'd do on the command line, but it avoids reopening filehandles.
$ perl -lane '$fh{$F[0]} || open $fh{$F[0]}, ">>", $F[0]; print {$fh{$F[0]}} $F[1]' file
Not pure Perl, but you can do:
perl -nae '`echo $F[1] >> $F[0]`' input_file
This is what a2p <<< '{print $2 >> $1}' produces
#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
    if $running_under_some_shell;
# this emulates #! processing on NIH machines.
# (remove #! line above if indigestible)
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches
$, = ' '; # set output field separator
$\ = "\n"; # set output record separator
while (<>) {
    ($Fld1,$Fld2) = split(' ', $_, -1);
    &Pick('>>', $Fld1) &&
        (print $fh $Fld2);
}

sub Pick {
    local($mode,$name,$pipe) = @_;
    $fh = $name;
    open($name,$mode.$name.$pipe) unless $opened{$name}++;
}