perl query using -pie

This works:
perl -pi -e 's/abc/cba/g' hellofile
But this does not:
perl -pie 's/cba/abc/g' hellofile
In other words -pi -e works but -pie does not. Why?

The -i flag takes an optional argument (which, if present, must be immediately after it, not in a separate command-line argument) that specifies the suffix to append to the name of the input file for the purposes of creating a backup. Writing perl -pie 's/cba/abc/g' hellofile causes the e to be taken as this suffix, and as the e isn't interpreted as the normal -e option, Perl tries to run the script located in s/cba/abc/g, which probably doesn't exist.
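For comparison, here is how the common spellings behave (hellofile is just the file from the question):
perl -pi -e 's/cba/abc/g' hellofile     # works: -e is its own switch
perl -p -i -e 's/cba/abc/g' hellofile   # works: the same switches, unbundled
perl -pie 's/cba/abc/g' hellofile       # fails: the "e" is swallowed as -i's backup suffix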

Because -i takes an optional extension for backup files (e.g. -i.bak), additional flags cannot follow directly after -i.
From perldoc perlrun
-i[extension]
specifies that files processed by the <> construct are to be edited
in-place. It does this by renaming the input file, opening the output
file by the original name, and selecting that output file as the
default for print() statements. The extension, if supplied, is used to
modify the name of the old file to make a backup copy, following these
rules:
If no extension is supplied, no backup is made and the current file is
overwritten.
If the extension doesn't contain a *, then it is appended to the end
of the current filename as a suffix. If the extension does contain one
or more * characters, then each * is replaced with the current
filename. In Perl terms, you could think of this as:
($backup = $extension) =~ s/\*/$file_name/g;
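As a concrete sketch of those rules (hellofile is just a placeholder file name):
perl -pi.bak -e 's/abc/cba/g' hellofile       # no "*": backup is hellofile.bak
perl -pi'orig_*' -e 's/abc/cba/g' hellofile   # "*" replaced by the filename: backup is orig_hellofile
perl -pi -e 's/abc/cba/g' hellofile           # no extension: edited in place, no backup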

perl already tells you why :) Try-It-To-See
$ perl -pie " s/abc/cba/g " NUL
Can't open perl script " s/abc/cba/g ": No such file or directory
If you use B::Deparse you can see how perl compiles your code
$ perl -MO=Deparse -pi -e " s/abc/cba/g " NUL
BEGIN { $^I = ""; }
LINE: while (defined($_ = <ARGV>)) {
    s/abc/cba/g;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
-e syntax OK
If you look up $^I in perlvar you can learn about the -i switch :)
$ perldoc -v "$^I"
$INPLACE_EDIT
$^I The current value of the inplace-edit extension. Use "undef" to
disable inplace editing.
Mnemonic: value of -i switch.
Now if we revisit the first part, add an extra -e, and run it through Deparse, the behavior of the -i switch is explained
$ perl -pie -e " s/abc/cba/g " NUL
Can't do inplace edit: NUL is not a regular file.
$ perl -MO=Deparse -pie -e " s/abc/cba/g " NUL
BEGIN { $^I = "e"; }
LINE: while (defined($_ = <ARGV>)) {
    s/abc/cba/g;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
-e syntax OK
Could it really be that the e in -pie is taken as the backup extension? It seems so:
$ perl -MO=Deparse -pilogicus -e " s/abc/cba/g " NUL
BEGIN { $^I = "logicus"; }
LINE: while (defined($_ = <ARGV>)) {
    s/abc/cba/g;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
-e syntax OK
When in doubt, Deparse or Deparse,-p
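For reference, the -p option of Deparse adds explicit parentheses, which can make precedence questions easier to answer; the output should look something like this:
$ perl -MO=Deparse,-p -e " print $a + $b * 3 "
print(($a + ($b * 3)));
-e syntax OK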

Related

Patch binary with "perl -pi -e" doesn't work as expected in macOS Mojave

I want to patch a binary file with Perl. The command doesn't work today, but in the past I used it a lot.
The command below doesn't work on macOS:
perl -pi -e 's|\xA0\x37\x96\x30\xDE\x90|\xA7\x70\x92\x30\xD5\x9B|' /file.bin
If I use
perl -MO=Deparse -pi -e 's|\xA0\x37\x96\x30\xDE\x90|\xA7\x70\x92\x30\xD5\x9B|' /file.bin
the result is:
BEGIN { $^I = ""; }
LINE: while (defined($_ = <ARGV>)) {
    s/\xA0\x37\x96\x30\xDE\x90/\247p\2220\325\233/;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
-e syntax OK
Why was the replacement section modified like this?
I checked the syntax 1000 times and it is correct; why doesn't it work as expected?
It wasn't modified.
"\xA7\x70\x92\x30\xD5\x9B"
and
"\247p\2220\325\233"
are equivalent.
$ perl -e'CORE::say "\xA7\x70\x92\x30\xD5\x9B" eq "\247p\2220\325\233" ? "same" : "diff"'
same
247 (octal) = A7 (hex)
p's ASCII encoding is 70 (hex)
222 (octal) = 92 (hex)
0's ASCII encoding is 30 (hex)
325 (octal) = D5 (hex)
233 (octal) = 9B (hex)
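If you want to check those conversions yourself, printf's octal format makes it a one-liner (just a sanity check):
$ perl -e 'printf "%o %o %o %o\n", 0xA7, 0x92, 0xD5, 0x9B'
247 222 325 233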

Perl script throws syntax error for awk command

I have a file which contains each user's userid and password. I need to fetch the userid and password from that file by passing the userid as a search element, using an awk command.
user101,smith,smith#123
user102,jones,passj#007
user103,albert,albpass#01
I am using an awk command inside my Perl script like this:
...
...
my $userid = $ARGV[0];
my $user_report_file = "report_file.txt";
my $data = `awk -F, '$1 ~ /$userid/ {print $2, $3}' $user_report_file`;
my ($user,$pw) = split(" ",$data);
...
...
Here I am getting the error:
awk: ~ /user101/ {print , }
awk: ^ syntax error
But if I run the same command in a terminal window it's able to give a result like the one below:
$] awk -F, '$1 ~ /user101/ {print $2, $3}' report_file.txt
smith smith#123
What could be the issue here?
The backticks are a double-quoted context, so you need to escape any literal $ that you want awk to interpret.
my $data = `awk -F, '\$1 ~ /$userid/ {print \$2, \$3}' $user_report_file`;
If you don't do that, you're interpolating the capture variables from the last successful Perl match.
When I have these sorts of problems, I try the command as a string first to see if it is what I expect:
my $data = "awk -F, '\$1 ~ /$userid/ {print \$2, \$3}' $user_report_file";
say $data;
Here's the Perl equivalent of that command:
$ perl -aF, -e '$F[0]=~/101/ && print "@F[1,2]"' report_file
But, this is something you probably want to do in Perl instead of creating another process:
Interpolating data into external commands can go wrong, such as a filename that is foo.txt; rm -rf / (one shell-free alternative is sketched after this list).
The awk you run is the first one in the path, so someone can make that a completely different program (so use the full path, like /usr/bin/awk).
Taint checking can tell you when you are passing unsanitized data to the shell.
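If you do want to keep calling awk, one option (a sketch of mine, reusing the $userid and $user_report_file variables and the /usr/bin/awk path suggested above) is the list form of open, which runs the command directly without involving a shell:
# Sketch only: the list form of open exec()s the argument list, so the
# shell never parses $userid or the filename and metacharacters cannot
# inject extra commands; \Q escapes regex metacharacters for awk's pattern.
open my $awk, '-|', '/usr/bin/awk', '-F,',
    "\$1 ~ /\Q$userid\E/ {print \$2, \$3}", $user_report_file
    or die "Cannot run awk: $!";
my $data = <$awk>;   # read the first matching line
close $awk;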
Inside a program you don't get all the one-liner shortcuts. But if this is the slow part of your program, you probably want to rethink how you are accessing this data, because scanning the entire file with any tool isn't going to be fast:
open my $fh, '<', $user_report_file or die "Cannot open $user_report_file: $!";
while( <$fh> ) {
    chomp;
    my @F = split /,/;
    next unless $F[0] =~ /\Q$userid/;
    print "@F[1,2]\n";
    last; # if you only want the first one
}

perl backticks: use bash instead of sh

I noticed that when I use backticks in perl the commands are executed using sh, not bash, giving me some problems.
How can I change that behavior so perl will use bash?
PS. The command that I'm trying to run is:
paste filename <(cut -d \" \" -f 2 filename2 | grep -v mean) >> filename3
The "system shell" is not generally mutable. See perldoc -f exec:
If there is more than one argument in LIST, or if LIST is an array with more than one value, calls execvp(3) with the arguments in LIST. If
there is only one scalar argument or an array with one element in it, the argument is checked for shell metacharacters, and if there are any, the
entire argument is passed to the system's command shell for parsing (this is "/bin/sh -c" on Unix platforms, but varies on other platforms).
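As a small illustration of the difference the docs describe (a sketch; echo and the glob are only stand-ins):
system('echo *');       # one string containing a metacharacter: runs via /bin/sh -c, so * is expanded
system('echo', '*');    # list with more than one element: execvp() runs echo directly, * stays literal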
If you really need bash to perform a particular task, consider calling it explicitly:
my $result = `/usr/bin/bash command arguments`;
or even:
open my $bash_handle, '| /usr/bin/bash' or die "Cannot open bash: $!";
print $bash_handle 'command arguments';
You could also put your bash commands into a .sh file and invoke that directly:
my $result = `/usr/bin/bash script.sh`;
Try
`bash -c \"your command with args\"`
I am fairly sure the argument of -c is interpreted the way bash interprets its command line. The trick is to protect it from sh - that's what quotes are for.
This example works for me:
$ perl -e 'print `/bin/bash -c "echo <(pwd)"`'
/dev/fd/63
To deal with running bash and nested quotes, this article provides the best solution: How can I use bash syntax in Perl's system()?
my @args = ( "bash", "-c", "diff <(ls -l) <(ls -al)" );
system(@args);
I thought perl would honor the $SHELL variable, but then it occurred to me that its behavior might actually depend on your system's exec implementation. In mine, it seems that exec will execute the shell (/bin/sh) with the path of the file as its first argument.
You can always do qw/bash your-command/, no?
Create a perl subroutine:
sub bash { return `cat << 'EOF' | /bin/bash\n$_[0]\nEOF\n`; }
And use it like below:
my $bash_cmd = 'paste filename <(cut -d " " -f 2 filename2 | grep -v mean) >> filename3';
print &bash($bash_cmd);
Or use perl here-doc for multi-line commands:
$bash_cmd = <<'EOF';
for (( i = 0; i < 10; i++ )); do
echo "${i}"
done
EOF
print &bash($bash_cmd);
I like to define a function btck (which integrates error checking) and a bash_btck (which uses bash):
use Carp;
sub btck ($)
{
    # Like backticks but the error check and chomp() are integrated
    my $cmd = shift;
    my $result = `$cmd`;
    $? == 0 or confess "backtick command '$cmd' returned non-zero";
    chomp($result);
    return $result;
}
sub bash_btck ($)
{
    # Like backticks but use bash and the error check and chomp() are
    # integrated
    my $cmd = shift;
    my $sqpc = $cmd; # Single-Quote-Protected Command
    $sqpc =~ s/'/'"'"'/g;
    my $bc = "bash -c '$sqpc'";
    return btck($bc);
}
One of the reasons I like to use bash is for safe pipe behavior:
sub safe_btck ($)
{
    return bash_btck('set -o pipefail && '.shift);
}
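A hypothetical use of those helpers (the pipeline is only an example): thanks to pipefail, a failure in the first stage is reported instead of being masked by the exit status of the final sort.
my $hosts = safe_btck("grep -v '^#' /etc/hosts | sort");
print "$hosts\n";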

How to put 'perl -pne' functionality in a perl script

So at the command line I can conveniently do something like this:
perl -pne 's/from/to/' in > out
And if I need to repeat this and/or I have several other perl -pne transformations, I can put them in, say, a .bat file in Windows. That's a rather roundabout way of doing it, of course. I should just write one perl script that has all those regex transformations.
So how do you write it? If I have a shell script containing these lines:
perl -pne 's/from1/to1/' in > temp
perl -pne 's/from2/to2/' -i temp
perl -pne 's/from3/to3/' -i temp
perl -pne 's/from4/to4/' -i temp
perl -pne 's/from5/to5/' temp > out
How can I just put these all into one perl script?
-e accepts an arbitrarily complex program, so just chain your substitution operations:
perl -pe 's/from1/to1/; s/from2/to2/; s/from3/to3/; s/from4/to4/; s/from5/to5/' in > out
If you really want a Perl program that handles input and looping explicitly, deparse the one-liner to see the generated code and work from there.
> perl -MO=Deparse -pe 's/from1/to1/; s/from2/to2/; s/from3/to3/; s/from4/to4/; s/from5/to5/'
LINE: while (defined($_ = <ARGV>)) {
    s/from1/to1/;
    s/from2/to2/;
    s/from3/to3/;
    s/from4/to4/;
    s/from5/to5/;
}
continue {
    print $_;
}
-e syntax OK
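If you would rather have a standalone script than a one-liner, a minimal sketch built from that deparsed loop (the patterns and the script name transform.pl are placeholders) could be:
#!/usr/bin/perl
use strict;
use warnings;

# Behaves like perl -pe: read the files named on the command line
# (or STDIN), apply each substitution, and print every line.
while (<>) {
    s/from1/to1/;
    s/from2/to2/;
    s/from3/to3/;
    s/from4/to4/;
    s/from5/to5/;
    print;
}
Then run it as perl transform.pl in > out.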
Related answer to the question you didn't quite ask: the perl special variable $^I, used together with @ARGV, gives the in-place editing behavior of -i. As with the -p option, Deparse will show the generated code:
perl -MO=Deparse -pi.bak -le 's/foo/bar/'
BEGIN { $^I = ".bak"; }
BEGIN { $/ = "\n"; $\ = "\n"; }
LINE: while (defined($_ = <ARGV>)) {
    chomp $_;
    s/foo/bar/;
}
continue {
    print $_;
}
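To get the same in-place behavior from a standalone script, you can set those variables yourself before the loop; a minimal sketch (the file name and pattern are placeholders):
#!/usr/bin/perl
use strict;
use warnings;

$^I   = '.bak';         # backup extension, as -i.bak would set
@ARGV = ('hellofile');  # file(s) to edit in place (placeholder name)

while (<>) {
    s/foo/bar/;
    print;              # with $^I set, this goes to the edited file
}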

editing text files with perl

I'm trying to edit a text file that looks like this:
TYPE=Ethernet
HWADDR=00:....
IPV6INIT=no
MTU=1500
IPADDR=192.168.2.247
...
(It's actually the /etc/sysconfig/network-scripts/ifcfg- file on Red Hat Linux)
Instead of reading and rewriting the file each time I want to modify it, I figured I could use grep, sed, awk or the native text parsing functionality provided in Perl.
For instance, if I wanted to change the IPADDR field of the file, is there a way I can just retrieve and modify the line directly? Maybe something like
grep 'IPADDR=' <filename>
but add some additional arguments to modify that line? I'm a little new to UNIX based text processing languages so bear with me...
Thanks!
Here's a Perl one-liner to replace the IPADDR value with the IP address 127.0.0.1. It's short enough that you should be able to see what you need to modify to alter other fields*:
perl -p -i.orig -e 's/^IPADDR=.*$/IPADDR=127.0.0.1/' filename
It will rename "filename" to "filename.orig", and write out the new version of the file into "filename".
Perl command-line options are explained at perldoc perlrun (thanks for the reminder toolic!), and the syntax of perl regular expressions is at perldoc perlre.
*The regular expression ^IPADDR=.*$, split into components, means:
^ # bind to the beginning of the line
IPADDR= # plain text: match "IPADDR="
.* # followed by any number of any character (`.` means "any one character"; `*` means "any number of them")
$ # bind to the end of the line
Since you are on Red Hat, you can try using the shell:
#!/bin/bash
file="file"
read -p "Enter field to change: " field
read -p "Enter new value: " newvalue
shopt -s nocasematch
while IFS="=" read -r f v
do
  case "$f" in
    $field)
      v=$newvalue;;
  esac
  echo "$f=$v"
done <$file > temp
mv temp file
UPDATE:
file="file"
read -p "Enter field to change: " field
read -p "Enter new value: " newvalue
shopt -s nocasematch
EOL=false
IFS="="
until $EOL
do
  read -r f v || EOL=true
  case "$f" in
    $field)
      v=$newvalue;;
  esac
  echo "$f=$v"
done <$file #> temp
#mv temp file
Or, using just awk:
awk 'BEGIN{
  printf "Enter field to change: "
  getline field < "-"
  printf "Enter new value: "
  getline newvalue <"-"
  IGNORECASE=1
  OFS=FS="="
}
field == $1{
  $2=newvalue
}
{
  print $0 > "temp"
}END{
  cmd="mv temp "FILENAME
  system(cmd)
}' file
Or with Perl:
printf "Enter field: ";
chomp($field=<STDIN>);
printf "Enter new value: ";
chomp($newvalue=<STDIN>);
while (<>){
    chomp;
    my ( $f , $v ) = split /=/;
    if ( $field =~ /^$f/i){
        $v=$newvalue;
    }
    print join("=",$f,$v),"\n";
}
That would be the 'ed' command-line editor: like sed, but it will put the file back where it came from.