I'm trying to use a perl -npe one-liner to surround each line with =.
$ for i in {1..4}; { echo $i ;} |perl -npe '...'
=1=
=2=
=3=
=4=
The following is my first attempt. Note that the line feeds are in the incorrect position.
$ for i in {1..4}; { echo $i ;} |perl -npe '$_= "=".$_."=" '
=1
==2
==3
==4
=
I tried using chop to remove the line feeds and then re-add them in the correct position, but it didn't work.
$ for i in {1..4} ;{ echo $i ;} |perl -npe '$_= "=".chop($_)."=\n" '
=
=
=
=
=
=
=
=
How can I get the output I want? Thanks very much.
chop returns the removed character, not the remaining string. It modifies the variable in place. So the following is the correct usage:
perl -npe'chop( $_ ); $_ = "=$_=\n"'
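For example, a quick illustration of that return value (say just needs -E here):
$ perl -E '$s = "abc"; say "chop returned: ", chop $s; say "variable is now: $s"'
chop returned: c
variable is now: ab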
But we can improve this.
It's safer to use chomp instead of chop to remove trailing line feeds.
-n is implied by -p, and it's customary to leave it out when -p is used.
chomp and chop modify $_ by default, so we don't need to explicitly pass $_.
perl -pe'chomp; $_ = "=$_=\n"'
Finally, we can get the exact same behaviour out of -l, which removes the newline from the input and adds it back when printing.
perl -ple'$_ = "=$_="'
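With the loop from the question, this now gives the desired output:
$ for i in {1..4}; { echo $i ;} |perl -ple'$_ = "=$_="'
=1=
=2=
=3=
=4=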
Say I have this file:
cat > testfile.txt <<'EOF'
test1line
test23line
test456line
EOF
Now, I want to use perl in a "grep-like" manner (a one-liner expression/command with a file argument, which dumps output to the terminal/stdout), such that I match all of the numbers in the above lines and turn them into a zero-padded three-digit representation. So, I tried this to check if matching generally works:
perl -nE '/(.*test)(\d+)(line.*)/ && print "$1 - $2 -$3\n";' testfile.txt
#test - 1 -line
#test - 23 -line
#test - 456 -line
Well, that works; so now, I was thinking, I'll just call sprintf to format, assign that to a variable, and print that variable instead, and I'm done; unfortunately, my attempt there failed:
perl -nE '/(.*test)(\d+)(line.*)/ && $b = sprintf("%03d", $2); print "$1 - $b -$3\n";' testfile.txt
#Can't modify logical and (&&) in scalar assignment at -e line 1, near ");"
#Can't modify pattern match (m//) in scalar assignment at -e line 1, near ");"
#Execution of -e aborted due to compilation errors.
Ok, so something went wrong there. As far as I can tell from the error messages, mixing logical AND (&&) and assignment like in the above one-liner does not work.
So, how can I have a Perl one-liner where a regex-condition is checked; and if match is detected, a series of commands are executed, which may involve one or more assignments, and conclude with a print?
EDIT: found an invocation that works:
perl -nE '/(.*test)(\d+)(line.*)/ && printf("$1%03d$3\n", $2);' testfile.txt
#test001line
#test023line
#test456line
... but I'd still like to know how to do the same via sprintf and assignment to variable.
Precedence issue.
/.../ && $b = ...;
means
( /.../ && $b ) = ...;
You could use
/.../ && ( $b = ... );
/.../ && do { $b = ...; };
/.../ and $b = ...;
$b = ... if /.../;
But there's a second problem. You call print unconditionally.
perl -ne'printf "%s-%03d-%s\n", $1, $2, $3 if /(.*test)(\d+)(line.*)/'
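And for completeness, since the question specifically asked about sprintf plus an assignment, one way to write that (just a sketch using the if-statement form; any of the precedence fixes above would do) is:
$ perl -nE 'if (/(.*test)(\d+)(line.*)/) { my $b = sprintf("%03d", $2); say "$1$b$3" }' testfile.txt
#test001line
#test023line
#test456line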
MSG_OUT="<B><I>Skipping<N> all libraries and fonts...<N>"
perl -ne '%ES=("B","[1m","I","[3m","N","[m","O","[9m","R","[7m","U","[4m"); while (<>) { s/(<([BINORSU])>)/\e$ES{$2}/g; print; }'
This perl one-liner swaps a token for an escape sequence.
It works as intended but only if the input is surrounded with line feeds.
i.e.
echo "\x0a${MSG_OUT}\x0a" | perl -ne '.... etc.
How do I avoid this issue when reading from stdin?
-n wraps your code in while (<>) { ... }* (cf. perldoc perlrun). Thus, your one-liner is equivalent to:
perl -e '
while(<>) {
    %ES=("B","[1m","I","[3m","N","[m","O","[9m","R","[7m","U","[4m");
    while (<>) { s/(<([BINORSU])>)/\e$ES{$2}/g; print; }
}
'
[Line breaks added for readability. They can be removed if you so desire.]
See the double while (<>) { ... }? That's your issue: the first while (the one added by -n) reads a line, then the second while (the one you wrote) reads a second line, does your s/// (on the second line), and prints this second line updated. Thus, you need a blank line before the actual line you want to process.
To fix the issue, either remove the inner while(<>), or remove the -n flag. For instance:
perl -e '
%ES=("B","[1m","I","[3m","N","[m","O","[9m","R","[7m","U","[4m");
while (<>) { s/(<([BINORSU])>)/\e$ES{$2}/g; print; }
'
Or,
perl -ne '
BEGIN { %ES=("B","[1m","I","[3m","N","[m","O","[9m","R","[7m","U","[4m") };
s/(<([BINORSU])>)/\e$ES{$2}/g; print;
'
Note that instead of using -n and print, you can use -p, which is the same as -n with an extra print** at the end:
perl -pe '
BEGIN { %ES=("B","[1m","I","[3m","N","[m","O","[9m","R","[7m","U","[4m") };
s/(<([BINORSU])>)/\e$ES{$2}/g;
'
* For completeness, note that -n adds the label LINE before the while loop (LINE: while(<>) { ... }), although that doesn't matter in your case.
** The print added by -p is actually in a continue block after the while, although, once again, this doesn't matter in your case.
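You can see both details by deparsing a -p one-liner yourself (the exact output varies a little between Perl versions):
$ perl -MO=Deparse -pe 's/a/b/'
LINE: while (defined($_ = <ARGV>)) {
    s/a/b/;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
-e syntax OK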
If I run a script with perl -Fsomething, is that something value saved anywhere in the Perl environment where the script can find it? I'd like to write a script that by default reuses the input delimiter (if it's a string and not a regular expression) as the output delimiter.
Looking at the source, I don't think the delimiter is saved anywhere. When you run
perl -F, -an
the lexer actually generates the code
LINE: while (<>) {our @F=split(q\0,\0);
and parses it. At this point, any information about the delimiter is lost.
Your best option is to split by hand:
perl -ne'BEGIN { $F="," } @F=split(/$F/); print join($F, @F)' foo.csv
or to pass the delimiter as an argument to your script:
F=,; perl -F$F -sane'print join($F, @F)' -- -F=$F foo.csv
or to pass the delimiter as an environment variable:
export F=,; perl -F$F -ane'print join($ENV{F}, @F)' foo.csv
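For example, round-tripping a small throwaway CSV (foo.csv here is just sample data) with the environment-variable version:
$ printf '1,2,3\n4,5,6\n' > foo.csv
$ export F=,; perl -F$F -ane'print join($ENV{F}, @F)' foo.csv
1,2,3
4,5,6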
As @ThisSuitIsBlackNot says, it looks like the delimiter is not saved anywhere.
This is how perl.c stores the -F parameter:
case 'F':
    PL_minus_a = TRUE;
    PL_minus_F = TRUE;
    PL_minus_n = TRUE;
    PL_splitstr = ++s;
    while (*s && !isSPACE(*s)) ++s;
    PL_splitstr = savepvn(PL_splitstr, s - PL_splitstr);
    return s;
And then the lexer generates the code
LINE: while (<>) {our @F=split(q\0,\0);
However this is of course compiled, and if you run it with B::Deparse you can see what is stored.
$ perl -MO=Deparse -F/e/ -e ''
LINE: while (defined($_ = <ARGV>)) {
    our(@F) = split(/e/, $_, 0);
}
-e syntax OK
Being Perl, there is always a way, however ugly. (And this is some of the ugliest code I have written in a while):
use B::Deparse;
use Capture::Tiny qw/capture_stdout/;
my $f_var;

unless ($f_var) {
    my $stdout = capture_stdout {
        my $sub = B::Deparse::compile();
        &{$sub};  # Have to capture stdout, since I won't bother to set up compile to return the text instead of printing
    };
    my (undef, $split_line, undef) = split(/\n/, $stdout, 3);
    ($f_var) = $split_line =~ /our\(\@F\) = split\((.*), \$_, 0\);/;
    print $f_var, "\n";
}
Output:
$ perl -Fe/\\\(\\[\\\<\\{\"e testy.pl
m#e/\(\[\<\{"e#
You could possibly traverse the bytecode instead, since the start will probably be identical every time until you reach the pattern.
I have several AIX systems with a configuration file, let's call it /etc/bar/config. The file may or may not have a line declaring values for foo. An example would be:
foo = A_1,GROUP_1,USER_1,USER_2,USER_3
The foo line may or may not be the same on all systems. Different systems may have different values and a different number of values. My task is to add "bare minimum" values to the config file on all systems. The bare minimum line will look like this.
foo = A_1,USER_1,SYS_1,SYS_2
If the line does not exist, I must create it. If the line does exist, I must merge the two lines. Using my examples, the result would be this. The order of the values does not matter.
foo = A_1,GROUP_1,USER_1,USER_3,USER_2,SYS_1,SYS_2
Obviously I want a script to do my work. I have the standard sh, ksh, awk, sed, grep, perl, cut, etc. Since this is AIX, I do not have access to the GNU versions of these utilities.
Originally, I had a script with these commands to replace the entire foo line.
cp /etc/bar/config /etc/bar/config.$$
sed "s/foo = .*/foo = A_1,USER_1,SYS_1,SYS_2/" /etc/bar/config.$$ > /etc/bar/config
But this simply replaces the line. It does not take into consideration any pre-existing configuration, or the case where the line is missing entirely. And I'm doing other configuration modifications in the script, such as adding completely unique lines to other files and restarting a process, so I'd prefer this be some type of shell-based code snippet I can add to my change script. I am open to other options, especially if the solution is simpler.
Some dirty bash/sed:
#!/usr/bin/bash
input_file="some_filename"
v=$(grep -n '^foo *=' "$input_file")
lineno=$(cut -d: -f1 <<< "${v}0:")
base="A_1,USER_1,SYS_1,SYS_2,"
if [[ "$lineno" == 0 ]]; then
echo "foo = A_1,USER_1,SYS_1,SYS_2" >> "$input_file"
else
all=$(sed -n ${lineno}'s/^foo *= */'"$base"'/p' "$input_file" | \
tr ',' '\n' | sort | uniq | tr '\n' ',' | \
sed -e 's/^/foo = /' -e 's/, *$//' -e 's/ */ /g' <<< "$all")
sed -i "${lineno}"'s/.*/'"$all"'/' "$input_file"
fi
Untested bash, etc.
config=/etc/bar/config
default=A_1,USER_1,SYS_1,SYS_2
pattern='^foo[[:blank:]]*=[[:blank:]]*' # shared with grep and sed
if current=$( grep "$pattern" "$config" | sed "s/$pattern//" ) && [ -n "$current" ]
then
    new=$( echo "$current,$default" | tr ',' '\n' | sort | uniq | paste -sd, )
    sed "s/$pattern.*/foo = $new/" "$config" > "$config.$$.tmp" &&
        mv "$config.$$.tmp" "$config"
else
    echo "foo = $default" >> "$config"
fi
A vanilla perl solution:
perl -i -lpe '
BEGIN {%foo = map {$_ => 1} qw/A_1 USER_1 SYS_1 SYS_2/}
if (s/^foo\s*=\s*//) {
$found=1;
$foo{$_}=1 for split /,/;
$_ = "foo = " . join(",", keys %foo);
}
END {print "foo = " . join(",", keys %foo) unless $found}
' /etc/bar/config
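A possible run with the sample line from the question (the order of the merged values will differ between runs, since keys %foo returns hash keys in no particular order, which is fine because the question says order doesn't matter):
$ cat /etc/bar/config
foo = A_1,GROUP_1,USER_1,USER_2,USER_3
$ perl -i -lpe '...' /etc/bar/config      # the one-liner above
$ cat /etc/bar/config
foo = SYS_2,A_1,USER_3,GROUP_1,USER_1,SYS_1,USER_2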
This Perl code will do as you ask. It expects the path to the file to be modified as a parameter on the command line.
Note that it reads the entire input file into the array @config and then overwrites the same file with the modified data.
It works by building a hash %values from a combination of the items already present in the foo = line and the list of default items in @defaults. The combination is sorted in alphabetical order and joined with a comma.
use strict;
use warnings;
my @defaults = qw/ A_1 USER_1 SYS_1 SYS_2 /;
my ($file) = @ARGV;
my @config = <>;
open my $out_fh, '>', $file or die $!;
select $out_fh;
for ( @config ) {
    if ( my ($pfx, $vals) = /^(foo \s* = \s* ) (.+) /x ) {
        my %values;
        ++$values{$_} for $vals =~ /[^,\s]+/g;
        ++$values{$_} for @defaults;
        print $pfx, join(',', sort keys %values), "\n";
    }
    else {
        print;
    }
}
close $out_fh;
output
foo = A_1,GROUP_1,SYS_1,SYS_2,USER_1,USER_2,USER_3
Since you didn't provide sample input and expected output I couldn't test this but this is the right approach:
awk '
/foo = / { old = ","$3; next }
{ print }
END {
split("A_1,USER_1,SYS_1,SYS_2"old,all,/,/)
for (i in all)
if (!seen[all[i]]++)
new = (new ? new "," : "") all[i]
print "foo =", new
}
' /etc/bar/config > tmp && mv tmp /etc/bar/config
I'm writing a Perl script to read a log and re-write it into a new log, removing blank lines whenever 4 or more consecutive blank lines appear. In other words, I'll have to compress any run of 4 or more consecutive blank lines into a single line; but runs of 1, 2 or 3 blank lines must keep their original format. I have tried to find a solution online but the only thing I can find is
perl -00 -pe ''
or
perl -00pe0
Also, I have seen a vim example that deletes blocks of 4 empty lines, :%s/^\n\{4}//, which matches what I'm looking for, but that's vim, not Perl. Can anyone help with this? Thanks.
To collapse 4+ consecutive Unix-style EOLs to a single newline:
$ perl -0777 -pi.bak -e 's|\n{4,}|\n|g' file.txt
An alternative flavor using look-behind:
$ perl -0777 -pi.bak -e 's|(?<=\n)\n{3,}||g' file.txt
use strict;
use warnings;
my $cnt = 0;
sub flush_ws {
    $cnt = 1 if ($cnt >= 4);
    while ($cnt > 0) { print "\n"; $cnt--; }
}
while (<>) {
    if (/^$/) {
        $cnt++;
    } else {
        flush_ws();
        print $_;
    }
}
flush_ws();
Your -0 hint is a good one since you can use -0777 to slurp the whole file in -p mode. Read more about these switches in perlrun. So this one-liner should do the trick:
$ perl -0777 -pe 's/\n{5,}/\n\n/g'
If there are up to four newlines in a row, nothing happens. Five or more newlines (four or more empty lines) are replaced by two newlines (one empty line). Note the /g flag, which replaces every match rather than only the first.
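A quick check (the counts here are just illustrative): three blank lines survive, five collapse to one.
$ printf 'a\n\n\n\nb\n\n\n\n\n\nc\n' | perl -0777 -pe 's/\n{5,}/\n\n/g'
a



b

c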
Deparsed code:
BEGIN { $/ = undef; $\ = undef; }
LINE: while (defined($_ = <ARGV>)) {
    s/\n{5,}/\n\n/g;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
HTH! :)
One way using GNU awk, setting the record separator to NUL:
awk 'BEGIN { RS="\0" } { gsub(/\n{5,}/,"\n")}1' file.txt
This assumes that your definition of empty excludes whitespace.
This will do what you need:
perl -ne 'if (/\S/) {$n = 1 if $n >= 4; print "\n" x $n, $_; $n = 0} else {$n++}' myfile
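For example (input built with printf): a run of 4 blank lines is compressed to one, while a run of 2 is left alone.
$ printf 'a\n\n\n\n\nb\n\n\nc\n' | perl -ne 'if (/\S/) {$n = 1 if $n >= 4; print "\n" x $n, $_; $n = 0} else {$n++}'
a

b


c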