I am trying to learn Perl following the approach described in the book "Minimal Perl".
I've uploaded all the source txt files to my own server: results folder
I got the output I need by using several bash commands in a "chain" like this:
cat run*.txt | grep '^Bank[[:space:]]Balance'|cut -d ':' -f2 | grep -E '\$[0-9]+'
I know this is far from the most concise and efficient, but at least it works...
As our uni subject now moves on to the Perl part, I'd like to know if there is a way to get the same results in one line?
I am trying something like the following but am stuck in the middle:
Chenxi Mao#chenxi-a6b123bb /cygdrive/c/eMarket/output
$ perl -wlne 'print; if $n=~/^Bank Balance/'
syntax error at -e line 1, near "if $n"
Execution of -e aborted due to compilation errors.
You shouldn't have a ; after the print, and with -n the current line is in $_, not $n. So:
perl -wlne 'print $1 if /^Bank\s+Balance\s*:\s*(\$\d+)/' run*.txt
perl -F/\:/ -ane 'print $F[1]."\n" if /Bank Balance/ && $F[1]!~/\$-/' run*.txt
Also, here's a short version of your bash chain using just awk:
awk -F": " '/Bank[ \t]*Balance/&& $2!~/\$-/{print $2}' run*.txt
I have a text file (input.txt) like this:
NP_414685.4: 15-26, 131-138, 441-465
NP_418580.2: 493-500
NP_418780.2: 36-48, 44-66
NP_418345.2:
NP_418473.3: 1-19, 567-1093
NP_418398.2:
I want a Perl one-liner that keeps only those lines in the file where ":" is followed by a number range (that means, here, the lines containing "NP_418345.2:" and "NP_418398.2:" get deleted). For this I have tried:
perl -ni -e "print unless /: \d/" -pi.bak input.txt del input.txt.bak
But it shows exactly the same output as the input file.
What is the exact pattern that I should match here?
Thanks
First, print unless means print if not -- opposite to what you want.
More to the point, it doesn't make sense to use both -n and -p, and when you do, -p overrides the other. While both of them open the input file(s) and set up the loop over lines, -p also prints $_ on every iteration, so with it you are reprinting every line. See perlrun.
Finally, you seem to be deleting the .bak file afterwards? Then don't make it in the first place: use just -i.
Altogether
perl -i -ne 'print if /:\s*\d+\s*-\s*\d+/' input.txt
If you do want to keep the backup file, use -i.bak instead of -i.
You can see the code equivalent of a one-liner with particular options by using B::Deparse (via the O module).
Try: perl -MO=Deparse -ne 1 and perl -MO=Deparse -pe 1
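For instance, the -n skeleton comes out roughly like this (the exact formatting varies a little between Perl versions, and the '???' is how Deparse renders the optimized-away constant 1):
$ perl -MO=Deparse -ne 1
LINE: while (defined($_ = readline ARGV)) {
    '???';
}
-e syntax OK
With -pe you additionally get a continue block that prints $_, which is why combining -n and -p just reprints every line.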
This way:
perl -i.bak -ne 'print if /:\s+\d+-\d/' input.txt
This:
perl -ne 'print if /:\s*(\d+\s*-\s*\d+\s*,?\s*)+\s*$/' input.txt
Prints:
NP_414685.4: 15-26, 131-138, 441-465
NP_418580.2: 493-500
NP_418780.2: 36-48, 44-66
NP_418473.3: 1-19, 567-1093
I'm not sure if you want to match lines that are possibly like this:
NP_418580.2: 493-500, asdf
or this:
NP_418580.2: asdf
This answer will not print such lines if they appear in the input.
I am a beginner with Perl. I am using the Perl command below to search and replace the "/$" sequence in my Tcl script. It works well when used directly on the Linux command line.
perl -p -i -e 's/\/\$/\/\\\$/g' sed_list.tcl
I am trying to call the above Perl one-liner from another Perl script, both with the system command and with backticks (``) alone:
system(`perl -p -i -e 's/\/\$/\/\\\$/g' sed_list.tcl`);
`perl -p -i -e 's/\/\$/\/\\\$/g' sed_list.tcl`;
I am getting the error below. Please help with this issue.
Bareword found where operator expected at -e line 1, near "$/g"
(Missing operator before g?)
Final $ should be \$ or $name at -e line 1, within string
syntax error at -e line 1, near "s//$/"
Execution of -e aborted due to compilation errors.
I don't know whether I can use another delimiter such as % or #, just like with the sed command, but when I used '%' as the delimiter, I saw no error and yet the job was not done.
`perl -p -i -e 's%\/\$%\/\\\$%g' sed_list.tcl`;
I couldn't find much on the web about this particular issue with the '$' character. Any help is appreciated.
Someone here suggested that I should escape all backslashes when using the system command or calling another command with backticks from inside a Perl script, but they later deleted their answer. It worked for me. I would like to thank everyone for taking the effort to help me out in solving my question.
Here is the correct working code.
`perl -p -i -e 's/\\\/\\\$/\\\/\\\\\\\$/g' sed_list_gen.tcl`;
or use the system function as shown below:
system("perl -p -i -e 's/\\\/\\\$/\\\/\\\\\\\$/g' sed_list_gen.tcl");
Thanks once again to the community for helping me out.
You can execute an external command by passing the command to the system function or by using the backticks (``) operator. Pass the command to the system() function as a non-interpolating q{...} string:
system(q{perl -p -i -e 's/\/\$/\/\\\$/g' sed_list.tcl})
or use backticks, in which case the backslashes have to be doubled again (backticks interpolate just like double quotes), as in the working code above:
`perl -p -i -e 's/\\\/\\\$/\\\/\\\\\\\$/g' sed_list_gen.tcl`;
Edit:
As suggested by Paul in the comments.
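For what it's worth, here is a minimal standalone sketch (it executes nothing and touches no files) of why the plain backtick call failed: backticks interpolate like double quotes, so the outer Perl rewrites the command before the shell ever sees it.
my $cmd = "perl -p -i -e 's/\/\$/\/\\\$/g' sed_list.tcl";   # same interpolation rules as backticks
print $cmd, "\n";   # prints: perl -p -i -e 's//$//\$/g' sed_list.tcl
That mangled s//$/... is exactly what the inner perl received, hence the "Final $ should be \$ or $name" error; doubling every backslash, as in the working code above, keeps the intended characters intact through that extra round of interpretation.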
I have been using a Linux environment and recently migrated to Solaris. Unfortunately, one of my bash scripts requires grep with the -P switch (PCRE support). As Solaris doesn't support the PCRE option for grep, I am obliged to find another solution; pcregrep seems to have an obvious looping bug, and sed's -r option is unsupported.
I hope that using perl or nawk will solve the problem on Solaris.
I have not yet used Perl in my scripts and am unaware of both its syntax and its flags.
Since it is PCRE, I believe a Perl scripter can help me out in a matter of minutes. The patterns should match over multiple lines.
Which one would be the better solution in terms of efficiency, the awk one or the Perl one?
Thanks for the replies.
These are some grep to perl conversions you might need:
grep -P PATTERN FILE(s) ---> perl -nle 'print if m/PATTERN/' FILE(s)
grep -Po PATTERN FILE(s) ---> perl -nle 'print $1 while m/(PATTERN)/g' FILE(s)
That's my guess as to what you're looking for, if grep -P is out of the question.
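If the patterns really do need to match across line boundaries, as the question mentions, one possible addition (PATTERN is just a placeholder here) is to slurp each file whole with -0777; note that this loads each file into memory:
perl -0777 -ne 'print "$&\n" while m/PATTERN/gs' FILE(s)
Add the /m modifier as well if ^ and $ should still anchor at individual lines inside the slurped text.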
Here's a shorty:
grep -P 'regex' ====> perl -ne 'print if /regex/'
The -n makes Perl loop over each line of the input; each line is put into the special Perl variable $_ as it works through the whole file.
The -e says the Perl program is given on the command line instead of in a file.
The Perl print command prints whatever is in $_ if you don't give it anything else to print.
The if /regex/ matches the regular expression against whatever line of your file is in the $_ variable.
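If it helps to see that spelled out, Deparse shows roughly the loop that -n wraps around the code (the exact output differs slightly between Perl versions):
$ perl -MO=Deparse -ne 'print if /regex/'
LINE: while (defined($_ = readline ARGV)) {
    print $_ if /regex/;
}
-e syntax OK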
I am using Perl to search and replace multiple regular expressions:
When I execute the following command, I get an error:
prompt> find "*.cpp" | xargs perl -i -pe 's/##(\W)/\1/g' -pe 's/(\W)##/\1/g'
syntax error at -e line 2, near "s/(\W)##/\1/g"
Execution of -e aborted due to compilation errors.
xargs: perl: exited with status 255; aborting
Having multiple -e switches is valid in Perl, so why is this not working? Is there a solution to this?
Several -e's are allowed.
You are missing the ';'
find "*.cpp" | xargs perl -i -pe 's/##(\W)/\1/g;' -pe 's/(\W)##/\1/g;'
Perl statements have to be separated by ;.
The final statement in a block doesn't need a terminating semicolon.
So a single -e without ; will work, but you will have to add ; when you have multiple -e statements.
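Concretely, each -e chunk just becomes one line of a single combined program (see perlrun), so without semicolons the second s/// directly follows the first with no statement separator, which is the syntax error you saw; with them, the combined program is simply:
s/##(\W)/\1/g;
s/(\W)##/\1/g;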
Having multiple -e values is valid, but is it useful? The values from the multiple -e are merely combined into one program, and it's up to you to ensure that together they make a syntactically correct program. The B::Deparse program can show you what perl thinks the program is:
$ perl -MO=Deparse -e 'print' -e 'q(Hello' -e ')'
print "Hello\n";
-e syntax OK
A curious thing to note is that a newline snuck in there. Think about how it got there to see what else perl is doing to combine multiple -e values.
In your program, you are substituting on the current line, then taking the modified line and substituting again. That's better written as:
prompt> find "*.cpp" | xargs perl -i -pe 's/##(\W)/\1/g; s/(\W)##/\1/g'
Now, if you are building up this command line by adding more and more -e through some automated process and you don't know ahead of time what you get, maybe those -e make sense. However, you might consider that you can do the same thing to build up the string you give to -e. I don't know what might be better because you didn't explain why you are doing it that way.
But, I suspect that in some cases, people are actually thinking about having only one substitution work. They want to try one and if its pattern doesn't work, try a different one until one succeeds. In that case you don't want to separate the substitutions by semicolons. Use the short-circuiting || instead. The s/// returns the number of substitutions it made and || will stop (short circuit) when it finds a true value:
prompt> find "*.cpp" | xargs perl -i -pe 's/##(\W)/\1/g || s/(\W)##/\1/g'
And note, you only need one -p. It only does its job once. Here's the program with multiple -p deparsed:
$ perl -MO=Deparse -i -pe 's/##(\W)/\1/g;' -pe 's/(\W)##/\1/g;'
BEGIN { $^I = ""; }
LINE: while (defined($_ = readline ARGV)) {
    s/##(\W)/$1/g;
    s/(\W)##/$1/g;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
-e syntax OK
It's the same thing as having only one -p:
$ perl -MO=Deparse -pi -e 's/##(\W)/\1/g;' -e 's/(\W)##/\1/g;'
BEGIN { $^I = ""; }
LINE: while (defined($_ = readline ARGV)) {
    s/##(\W)/$1/g;
    s/(\W)##/$1/g;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
-e syntax OK
Thanks so much! You helped me reduce my ASCII / decimal / 8-bit binary table printer enough to fit in a tweet:
for i in {32..126};do printf "'\x$(printf %x $i)'(%3i) = " $i; printf '%03o\n' $i | perl \
-pe 's#0#000#g;' -pe 's#1#001#g;' -pe 's#2#010#g;' -pe 's#3#011#g;' \
-pe 's#4#100#g;' -pe 's#5#101#g;' -pe 's#6#110#g;' -pe 's#7#111#g' ; done | \
perl -pe 's#= 0#= #'
I have a file that contains paths like these:
C:\bad\foo.c
C:\good\foo.c
C:\good\bar\foo.c
C:\good\bar\[variable subdir count]\foo.c
And I would like to get the following file:
C:\bad\foo.c
C:/good/foo.c
C:/good/bar/foo.c
C:/good/bar/[variable subdir count]/foo.c
Note that the non-matching path should not be modified.
I know how to do this with sed for a fixed number of subdirectories, but a variable number is giving me trouble. Actually, I would have to use many s/x/y/ expressions (as many as the maximum depth... not very elegant).
Maybe awk could do it, but this kind of magic is beyond my skills.
FYI, I need this trick to correct some gcov binary files on a cygwin platform.
I am dealing with binary files; therefore, I might have the following kind of data:
bindata\bindata%bindataC:\good\foo.c
which should be translated as:
bindata\bindata%bindataC:/good/foo.c
The first \ must not be translated, even though it is on the same line.
However, I have just checked my .gcno files while editing this text and it looks like all the paths are flanked with zeros, so most of the answers below should fit.
sed -e '/^C:\\good/ s/\\/\//g' input_file.txt
I would recommend you look into the cygpath utility, which converts path names from one format to another. For instance on my machine:
$ cygpath `pwd`
/home/jericson
$ cygpath -w `pwd`
D:\root\home\jericson
$ cygpath -m `pwd`
D:/root/home/jericson
Here's a Perl implementation of what you asked for:
$ echo 'C:\bad\foo.c
C:\good\foo.c
C:\good\bar\foo.c
C:\good\bar\[variable subdir count]\foo.c' | perl -pe 's|\\|/|g if /good/'
C:\bad\foo.c
C:/good/foo.c
C:/good/bar/foo.c
C:/good/bar/[variable subdir count]/foo.c
It works directly with the string, so it will work anywhere. You could combine it with cygpath, but it only works on machines that have that path:
perl -pe '$_ = `cygpath -m $_` if /good/'
(Since I don't have C:\good on my machine, I get output like C:goodfoo.c. If you use a real path on your machine, it ought to work correctly.)
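Given the binary-data caveat added to the question (a path can be preceded by other bytes on the same line, and only the path itself should be rewritten), a possible refinement is to restrict the translation to the captured path. This is only a sketch: it assumes, as the questioner observed for .gcno files, that each path runs up to a NUL byte or the end of the line, and the file names are placeholders:
perl -pe 's{(C:\\good[^\x00]*)}{ (my $p = $1) =~ tr{\\}{/}; $p }ge' paths.bin > fixed.bin
Because nothing outside the capture is touched, the leading bindata\bindata%bindata part of the example line stays intact.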
You want to substitute '/' for all '\' but only on the lines that match the good directory path. Both sed and awk will let you do this by having a LHS (matching) expression that only picks the lines with the right path.
A trivial sed script to do this would look like:
/[Cc]:\\good/ s/\\/\//g
For a file:
c:\bad\foo
c:\bad\foo\bar
c:\good\foo
c:\good\foo\bar
You will get the output below:
c:\bad\foo
c:\bad\foo\bar
c:/good/foo
c:/good/foo/bar
Here's how I would do it in awk:
# fixpaths.awk
/C:\\good/ { gsub(/\\/, "/") }   # convert the separators only on matching lines
{ print > "outfile" }            # print every line, modified or not, so non-matching paths are kept
Then run it using the command:
awk -f fixpaths.awk paths.txt; mv outfile paths.txt
Or with some help from good ol' Bash:
#!/bin/bash
cat file | while IFS= read -r LINE
do
  if <bad_condition>
  then
    echo "$LINE" >> newfile
  else
    echo "$LINE" | sed -e 's/\\/\//g' >> newfile
  fi
done
Try this:
sed -re '/\\good\\/ s/\\/\//g' temp.txt
Or this:
awk -F'\\' '{ if ($2 == "good") { OFS = "/"; $1 = $1 } print $0 }' temp.txt