I have a list of PCs and I need to wrap each name in quotes and separate them with commas so that I can use them in a SQL query.
Example list:
Row1|PCName|PC1.local
Row2|PCName|PC2.local
Row3|PCName|PC3.local
and I need to get this
"PC1.local", "PC2.local", "PC3.local", ......
Here is what I tried
cat list.txt | awk -F\| '{print $NF}' | perl -e 'while(<>){ print "\"$_\", ";}'
I get this
", "PC1.local
", "PC2.local
", "PC3.local
", "
How can I make those PCs show up in a single line and with the format that I need?
I know piping awk into Perl might be overkill here, and this could be done with Perl alone or awk alone, but I'm interested in learning how to pipe things into Perl. How can I make Perl print those PC names in the format I need?
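The broken lines in that attempt come from the trailing newline that is still part of $_ when it is interpolated, so the closing quote and comma wrap onto the next line. One possible fix (just a sketch, reusing the same list.txt and awk stage) is to chomp each line and join everything once at the end:
cat list.txt | awk -F\| '{print $NF}' | perl -ne 'chomp; push @pcs, qq{"$_"}; END { print join(", ", @pcs), "\n" }'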
How about:
#!/usr/bin/env perl
use strict;
use warnings;
print join ",", map { chomp; '"'.(split /\|/)[2].'"' } <DATA> ;
__DATA__
Row1|PCName|PC1.local
Row2|PCName|PC2.local
Row3|PCName|PC3.local
Output:
"PC1.local","PC2.local","PC3.local"
As a one-liner:
perl -e 'print join ",", map { s/\n//; q{"}.(split /\|/)[2].q{"} } <>'
$ awk -F'|' '{printf "%s\"%s\"", (NR>1?", ":""), $3} END{print ""}' file
"PC1.local", "PC2.local", "PC3.local"
With the standard Unix toolset:
$ cut -d'|' -f3 file | sed 's/.*/"&"/' | paste -s -d,
This extracts the third field, wraps it in quotes, and joins the lines with commas.
Here's a Perl one-line solution
$ perl -le 'print join ", ", map { /([^|\s]+)$/ && qq{"$1"} } <>' myfile
Output:
"PC1.local", "PC2.local", "PC3.local"
#!perl
use strict;
use warnings;
while ( my $line = readline(*STDIN) ) {
chomp $line;
my @machines = split /\|/, $line;
print join(',', map { '"' . $_ . '"' } @machines), "\n";
}
Output:
$ cat list.txt | perl test.pl
"Row1","PCName","PC1.local"
"Row2","PCName","PC2.local"
"Row3","PCName","PC3.local"
In the below script, I am not able to change the directory. For any filesystem above 70% disk usage, I need to find out which directory inside it is consuming the most space.
#!/usr/bin/perl
use strict;
use warnings;
my $test=qx("df -h |awk \+\$5>=70 {print \$6} ");
chdir($test) or die "$!";
print $test;
system("du -sh * | grep 'G'");
No need to call awk in your case because Perl is quite good at splitting and printing certain lines itself. Your code has some issues:
The code qx("df -h |awk \+\$5>=70 {print \$6} ") tries to execute the string "df -h |awk ..." as a single command, which fails because there is no command with that name. When I run that code I get sh: 1: df -h |awk +>=70 {print } : not found. You can fix that by dropping the quotes ", because qx() already does the quoting. Since the command fails, the variable $test is empty afterwards, so the chdir changes to your $HOME directory.
Then you'll see the next error: awk: line 1: syntax error at or near end of line, because it calls awk +\$5>=70 {print \$6}. The correct call would be awk '+\$5>=70 {print \$6}', i.e. with ticks ' around the awk scriptlet.
As stated in a comment, df -h splits long lines into two lines. Example:
Filesystem 1K-blocks Used Available Use% Mounted on
/long/and/possibly/remote/file/system
10735331328 10597534720 137796608 99% /local/directory
Use df -hP to get guaranteed column order and one line output.
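Putting those fixes together, a minimal sketch of the corrected capture (note it can still return more than one mount point, so treat the result as a list rather than a single chdir target):
my $test = qx(df -hP | awk '+\$5>=70 {print \$6}');
my @dirs = split /\n/, $test; # one mount point per line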
The last system call shows the directory usage (space) for all lines containing the letter G. I reckon that's not exactly what you want.
I suggest the following Perl script:
#!/usr/bin/env perl
use strict;
use warnings;
foreach my $line ( qx(df -hP) ) {
my ($fs, $size, $used, $avail, $use, $target) = split(/\s+/, $line);
next unless ($use =~ /^\d+\s*\%$/); # skip header line
# now $use is e.g. '90%' and we drop the '%' sign:
$use =~ s/\%$//;
if ($use > 70) {
print "almost full: $target; top 5 directories:\n";
# no need to chdir here. Simply use $target/* as search pattern,
# reverse-sort by "human readable" numbers, and show the top 5:
system("du -hs $target/* 2>/dev/null | sort -hr | head -5");
print "\n\n";
}
}
#!/usr/bin/perl
use strict;
use warnings;
my @bigd = map { my @f = split " "; $f[5] }
grep { my @f = split " "; $f[4] =~ /^(\d+)/ && $1 >= 70}
split "\n", `df -hP`;
print "big directories: $_\n" for @bigd;
for my $bigd (@bigd) {
chdir($bigd);
my @bigsubd = grep { my @f = split " "; $f[0] =~ /G/ }
split "\n", `du -sh *`;
print "big subdirectories in $bigd:\n";
print "$_\n" for @bigsubd;
}
I believe you wanted to do something like this.
How to add a blank line after every grep result?
For example, grep "xyz" may give something like this -
file1:xyz
file2:xyz
file2:xyz2
file3:xyz
I want the output to be like this, with a blank line between the results from different files -
file1:xyz

file2:xyz
file2:xyz2

file3:xyz
I would like to do something like
grep "xyz" | perl (code to add a new line after every grep result)
This is the direct answer to your question:
grep 'xyz' | perl -pe 's/$/\n/'
But this is better:
perl -ne 'print "$_\n" if /xyz/'
EDIT
Ok, after your edit, you want (almost) this:
grep 'xyz' * | perl -pe 'print "\n" if /^([^:]+):/ && ! $seen{$1}++'
If you don’t like the blank line at the beginning, make it:
grep 'xyz' * | perl -pe 'print "\n" if /^([^:]+):/ && ! $seen{$1}++ && $. > 1'
NOTE: This won’t work right on filenames with colons in them. :)
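If colons in filenames are a concern, one workaround (just a sketch, not part of the answer above) is to drop grep and let Perl track the current file via $ARGV, so nothing has to be parsed back out of the output:
perl -ne 'if (/xyz/) { print "\n" if defined $prev && $ARGV ne $prev; print "$ARGV:$_"; $prev = $ARGV }' *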
If you want to use perl, you could do something like
grep "xyz" | perl -p -e 's/(.*)/\1\n/g'
If you want to use sed (where I seem to have gotten better results), you could do something like
grep "xyz" | sed 's/.*/\0\n/g'
This prints a newline after every single line of grep output:
grep "xyz" | perl -pe 'print "\n"'
This prints a newline in between results from different files. (Answering the question as I read it.)
grep 'xyz' * | perl -pe '/(.*?):/; if ($f ne $1) {print "\n"; $f=$1}'
Use a state machine to determine when to print a blank line:
#!/usr/bin/env perl
use strict;
use warnings;
# state variable to determine when to print a blank line
my $prev_file = '';
# change DATA to the appropriate input file handle
while( my $line = <DATA> ){
# did the state change?
if( my ( $file ) = $line =~ m{ \A ([^:]*) \: .*? xyz }msx ){
# blank lines between states
print "\n" if $file ne $prev_file && length $prev_file;
# set the new state
$prev_file = $file;
}
# print every line
print $line;
}
__DATA__
file1:xyz
file2:xyz
file2:xyz2
file3:xyz
How can I write a Perl script to convert a text file to all upper case letters?
perl -ne "print uc" < input.txt
The -n wraps your command-line script (which is supplied by -e) in a while loop. uc returns the ALL-UPPERCASE version of the default variable $_, and what print does, well, you know it yourself. ;-)
The -p is just like -n, but it does a print in addition. Again, acting on the default variable $_.
To store that in a script file:
#!perl -n
print uc;
Call it like this:
perl uc.pl < in.txt > out.txt
$ perl -pe '$_= uc($_)' input.txt > output.txt
perl -pe '$_ = uc($_)' input.txt > output.txt
But then you don't even need Perl if you're using Linux (or *nix). Some other ways are:
awk:
awk '{ print toupper($0) }' input.txt >output.txt
tr:
tr '[:lower:]' '[:upper:]' < input.txt > output.txt
$ perl -Tpe " $_ = uc; " --
$ perl -MO=Deparse -Tpe " $_ = uc; " -- a s d f
LINE: while (defined($_ = <ARGV>)) {
$_ = uc $_;
}
continue {
die "-p destination: $!\n" unless print $_;
}
-e syntax OK
$ cat myprogram.pl
#!/usr/bin/perl -T --
LINE: while (defined($_ = <ARGV>)) {
$_ = uc $_;
}
continue {
die "-p destination: $!\n" unless print $_;
}
How could I convert:
awk '{print $2 >> $1}' file
in a short Perl one-liner?
"file" could look like this:
fruit banana
vegetable beetroot
vegetable carrot
mushroom chanterelle
fruit apple
There may be some other ways, but here's what I can think of:
perl -ane 'open(FILE,">>",$F[0]); print FILE "$F[1]\n"; close(FILE);' file
I guess awk has to be better at some things :-)
This is right at the limit of what I'd do on the command line, but it avoids reopening filehandles.
$ perl -lane '$fh{$F[0]} || open $fh{$F[0]}, ">>", $F[0]; print {$fh{$F[0]}} $F[1]' file
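With the sample file above, this should leave files named fruit, vegetable, and mushroom in the current directory, one value per line. For example (expected contents, not captured output):
$ cat fruit
banana
apple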
Not pure Perl, but you can do:
perl -nae '`echo $F[1] >> $F[0]`' input_file
This is what a2p <<< '{print $2 >> $1}' produces
#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
if $running_under_some_shell;
# this emulates #! processing on NIH machines.
# (remove #! line above if indigestible)
eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
# process any FOO=bar switches
$, = ' '; # set output field separator
$\ = "\n"; # set output record separator
while (<>) {
($Fld1,$Fld2) = split(' ', $_, -1);
&Pick('>>', $Fld1) &&
(print $fh $Fld2);
}
sub Pick {
local($mode,$name,$pipe) = @_;
$fh = $name;
open($name,$mode.$name.$pipe) unless $opened{$name}++;
}
I'm trying to create a short Perl command that creates SQL inserts for my DB based on a text file. However, I am not able to get the single quotes used by SQL into the output.
perl -pe '$i += 1; chomp; @a = split /\t/; $_ = "INSERT INTO XYZ VALUES($i, \'$a[4]\');\n"'
results in a syntax error near unexpected token `)'
Any ideas?
perl -pe '$i += 1; chomp; @a = split /\t/; $_ = "INSERT INTO XYZ VALUES($i, \047$a[4]\047);\n";'
You need to escape them for the shell, not for Perl. This requires a slightly different syntax. Assuming you're running this under bash, ksh, or similar, then
perl -e 'print "greengrocer'\''s\n"'
should print greengrocer's.
Alternatively, you could insert the character as a hex escape sequence:
perl -e 'print "greengrocer\x27s\n"'
perl -pe "\$i += 1; chomp; #a = split /\t/; \$_ = \"INSERT INTO XYZ VALUES(\$i, '\$a[4]');\n\""
This page states, for bash: "A single quote may not occur between single quotes, even when preceded by a backslash." So use double quotes instead and escape as necessary.
Use the technique @Porculus suggested in his answer:
perl -pe '$i += 1; chomp; @a = split /\t/; $_ = "INSERT INTO XYZ VALUES($i, '\''$a[4]'\'');\n";'
This closes the single-quoted string just before the single quote, then uses an escaped single-quote, then opens a new single-quoted string.
The beauty of this technique is that you can automate it:
sed -e "s/'/'\\\\''/g" -e "s/^/'/" -e "s/$/'/" <<'EOF'
$i += 1; chomp; @a = split /\t/; $_ = "INSERT INTO XYZ VALUES($i, '$a[4]');\n";
EOF
resulting in a properly quoted string you can paste onto the command line:
'$i += 1; chomp; @a = split /\t/; $_ = "INSERT INTO XYZ VALUES($i, '\''$a[4]'\'');\n";'
Or you can make a shell-script for it:
$ cat > quotify
#!/bin/sh
sed -e "s/'/'\\\\''/g" -e "s/^/'/" -e "s/$/'/"
^D
$ chmod +x quotify
$ ./quotify
$i += 1; chomp; @a = split /\t/; $_ = "INSERT INTO XYZ VALUES($i, '$a[4]');\n";
^D
'$i += 1; chomp; @a = split /\t/; $_ = "INSERT INTO XYZ VALUES($i, '\''$a[4]'\'');\n";'
(The above sed first replaces each ' with '\'', then puts a ' at the front and back.)
I found a work-around: use a | in place of each single quote, then in a second perl call replace the | with a single quote (perl -pe "s/\|/'/g"):
perl -pe '$i += 1; chomp; @a = split /\t/; $_ = "INSERT INTO XYZ VALUES($i, |$a[4]|, |$a[5]|, |$a[7]|, |$a[0]|, |$a[1]|, |$a[3]|);\n"' | perl -pe "s/\|/'/g"
Still interested in a better solution.
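One cleaner option (a sketch that applies the \x27 hex-escape idea from the earlier answer to this same one-liner, so no second perl call is needed):
perl -pe '$i += 1; chomp; @a = split /\t/; $_ = "INSERT INTO XYZ VALUES($i, \x27$a[4]\x27, \x27$a[5]\x27, \x27$a[7]\x27, \x27$a[0]\x27, \x27$a[1]\x27, \x27$a[3]\x27);\n"'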