search and select decimal numbers in a text file line - perl

I have XML text files which contain lines of multiple numbers (3 per line) separated by tabs/spaces, from which I would like to select each number separately.
From:
<tagname1> 110.0912 99.1234 55.1326 </tagname1>
Result:
110.0912
and:
99.1234
and:
55.1326
I would like to use sed, awk, grep, etc.; Perl is fine too. It seems simple, but I can't figure out a cleaner line. I've tried:
more FILENAME | grep tagname1 | grep -E -o "[0-9]+*\.[0-9]+" | head -n 1

A Perl one-liner with the Regexp::Common module: strip the tags, then print every real number that remains:
perl -MRegexp::Common -nE 's/<.*?>//g; say for /($RE{num}{real})/g' file

You can use the grep -o option.
$ cat file
<tagname1> 110.0912 99.1234 55.1326 </tagname1>
$ grep -oE '\b[0-9.]+\b' file
110.0912
99.1234
55.1326
\b defines a word boundary
[0-9.]+ is a character class that matches digits and . one or more times
-o option prints matched pattern only
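If you only need one of the three numbers, you can pick it out of that output with sed -n; a quick sketch with the sample file (picking the second value):
$ grep -oE '\b[0-9.]+\b' file | sed -n '2p'
99.1234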

awk -v which=2 '/<tagname1>(([0-9]*(\.[0-9]*)?)|[ \t])*<\/tagname1>/ {print $(which+1)}' input.txt
Select which number you want to print using the variable which; in this example it will print the second number (which=2), as in the run shown below.
input.txt:
<tagname1> 110.0912 99.1234 55.1326 </tagname1>
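For example, with which=2 and the sample input.txt above, the command should print the second number:
$ awk -v which=2 '/<tagname1>(([0-9]*(\.[0-9]*)?)|[ \t])*<\/tagname1>/ {print $(which+1)}' input.txt
99.1234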

You can use awk
awk '{print $2,$3,$4}' OFS="\n" file
110.0912
99.1234
55.1326

$ cat file
<tagname1> 110.0912 99.1234 55.1326 </tagname1>
$ awk -v tag="tagname1" -v nr=1 '$0~"<"tag">"{print $(nr+1)}' file
110.0912
$ awk -v tag="tagname1" -v nr=2 '$0~"<"tag">"{print $(nr+1)}' file
99.1234
$ awk -v tag="tagname1" -v nr=3 '$0~"<"tag">"{print $(nr+1)}' file
55.1326

Related

Removing a specific line in bash with an exact string

I'm having trouble getting sed to remove just the specific line I want. Let's say I have a file that looks like this:
testfile
testfile.txt
testfile2
Currently I'm using this to remove the line I want:
sed -i "/$1/d" file
The issue is that if I give testfile as input, this deletes all three lines, but I want it to remove only the first line. How do I do this?
With grep
grep -x -F -v -- "$1" file
# or
grep -xFv -- "$1" file
-F is for "fixed strings" -- it turns off the regex engine.
-x is to match entire line.
-v is for "everything but" the matched line(s).
-- to signal the end of options, in case $1 starts with a hyphen.
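For example, with the three-line file from the question and $1 set to testfile, only the exact match should be filtered out:
$ grep -xFv -- "testfile" file
testfile.txt
testfile2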
To save the file
grep -xFv -- "$1" file | sponge file # `moreutils` package
# or
tmp=$(mktemp)
grep -xFv -- "$1" file > "$tmp" && mv "$tmp" file
So match the whole line by anchoring the pattern:
var=testfile
sed -i '/^'"$var"'$/d' file
# or with " quoting
sed -i "/^$var\$/d" file
You can have fun learning regex online with regex crosswords.

Using a single sed call to split and grep

This is mostly by curiosity, I am trying to have the same behavior as:
echo -e "test1:test2:test3"| sed 's/:/\n/g' | grep 1
in a single sed command.
I already tried
echo -e "test1:test2:test3"| sed -e "s/:/\n/g" -n "/1/p"
But I get the following error:
sed: can't read /1/p: No such file or directory
Any idea on how to fix this and combine different types of commands into a single sed call?
Of course this is overly simplified compared to the real use case, and I know I can get around it by using multiple calls; again, this is just out of curiosity.
EDIT: I am mostly interested in the sed tool, I already know how to do it using other tools, or even combinations of those.
EDIT2: Here is a more realistic script, closer to what I am trying to achieve:
arch=linux64
base=https://chromedriver.storage.googleapis.com
split="<Contents>"
curl $base \
| sed -e 's/<Contents>/<Contents>\n/g' \
| grep $arch \
| sed -e 's/^<Key>\(.*\)\/chromedriver.*/\1/' \
| sort -V > out
What I would like to simplify is the curl line, turning it into something like:
curl $base \
| sed 's/<Contents>/<Contents>\n/g' -n '/1/p' -e 's/^<Key>\(.*\)\/chromedriver.*/\1/' \
| sort -V > out
Here are some alternatives, awk and sed based:
sed -E "s/(.*:)?([^:]*1[^:]*).*/\2/" <<< "test1:test2:test3"
awk -v RS=":" '/1/' <<< "test1:test2:test3"
# or also
awk 'BEGIN{RS=":"} /1/' <<< "test1:test2:test3"
Or, using your logic, you would need to pipe a second sed command:
sed "s/:/\n/g" <<< "test1:test2:test3" | sed -n "/1/p"
The awk solution looks cleanest.
Details
In the sed solution, the (.*:)?([^:]*1[^:]*).* pattern matches an optional sequence of any 0+ chars followed by a :, then captures into Group 2 zero or more chars other than :, a 1, and again zero or more chars other than :, and then just matches the rest of the line. The replacement keeps only the Group 2 contents.
In the awk solution, the record separator is set to : and the /1/ regex is used to return only the record(s) containing a 1.
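For reference, on the sample string both commands should print the first field containing 1:
$ sed -E "s/(.*:)?([^:]*1[^:]*).*/\2/" <<< "test1:test2:test3"
test1
$ awk -v RS=":" '/1/' <<< "test1:test2:test3"
test1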
This might work for you (GNU sed):
sed 's/:/\n/;/^[^\n]*1/P;D' file
Replace the first : with a newline and, if the first line in the pattern space contains a 1, print it. Then delete up to the newline and repeat.
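For instance, piping the sample string through it should give:
$ echo "test1:test2:test3" | sed 's/:/\n/;/^[^\n]*1/P;D'
test1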
An alternative:
sed -Ez 's/:/\n/g;s/^[^1]*$//mg;s/\n+/\n/;s/^\n//' file
This slurps the whole file into memory and replaces all colons by newlines. All lines that do not contain 1 are removed and surplus newlines deleted.
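For instance (GNU sed, since -z is a GNU extension), the sample string should reduce to the one field containing 1:
$ printf 'test1:test2:test3\n' | sed -Ez 's/:/\n/g;s/^[^1]*$//mg;s/\n+/\n/;s/^\n//'
test1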
An alternative to the really ugly sed is: grep -o '\w*2\w*'
$ printf "test1:test2:test3\nbob3:bob2:fred2\n" | grep -o '\w*2\w*'
test2
bob2
fred2
grep -o: only matching
Or: grep -o '[^:]*2[^:]*'
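The [^:]*2[^:]* version should behave the same on that input:
$ printf "test1:test2:test3\nbob3:bob2:fred2\n" | grep -o '[^:]*2[^:]*'
test2
bob2
fred2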
echo -e "test1:test2:test3" | sed -En 's/:/\n/g;/^[^\n]*2[^\n]*(\n|$)/P;//!D'
sed -n doesn't print unless told to
sed -E allows using parens to match (\n|$) which is newline or the end of the pattern space
P prints the pattern buffer up to the first newline.
D trims the pattern buffer up to the first newline
[^\n] is a character class that matches anything except a newline
// is sed shorthand for reusing the previous regex
//! then applies the following command when the pattern space did not match
So, after you split into newlines, you want to make sure the 2 character is between the start of the pattern buffer ^ and the first newline.
And, if there is not the character you are looking for, you want to D delete up to the first newline.
At that point, it works for one line of input, with one string containing the character you're looking for.
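For example:
$ echo "test1:test2:test3" | sed -En 's/:/\n/g;/^[^\n]*2[^\n]*(\n|$)/P;//!D'
test2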
To expand to several matches within a line, you have to use ta to conditionally branch back to the label :a:
$ printf "test1:test2:test3\nbob3:bob2:fred2\n" | \
sed -En ':a s/:/\n/g;/^[^\n]*2[^\n]*(\n|$)/P;D;ta'
test2
bob2
fred2
This is simply NOT a job for sed. With GNU awk for multi-char RS:
$ echo "test1:test2:test3:test4:test5:test6"| awk -v RS='[:\n]' '/1/'
test1
$ echo "test1:test2:test3:test4:test5:test6"| awk -v RS='[:\n]' 'NR%2'
test1
test3
test5
$ echo "test1:test2:test3:test4:test5:test6"| awk -v RS='[:\n]' '!(NR%2)'
test2
test4
test6
$ echo "foo1:bar1:foo2:bar2:foo3:bar3" | awk -v RS='[:\n]' '/foo/ || /2/'
foo1
foo2
bar2
foo3
With any awk you'd just have to strip the \n from the final record before operating on it:
$ echo "test1:test2:test3:test4:test5:test6"| awk -v RS=':' '{sub(/\n$/,"")} /1/'
test1

How to extract text from file to file using sed or grep?

My example string is in txt file /www/meteo/last.txt:
a:3:{i:0;s:4:"6.13";i:1;s:5:"19.94";i:2;s:5:"22.13";}
I would like to get line by line 3 numbers from that file to a new file.
(those values are temperatures, so they change over time - every 10 minutes)
New file /www/meteo/new.txt: (line by line)
6.13
19.94
22.13
Try this awk method
awk -F'"' 'BEGIN{OFS="\n"} {print $2,$4,$6}' last.txt > new.txt
Output:
cat new.txt
6.13
19.94
22.13
Or, if you wanted to use sed or grep:
sed -r 's/([^"]*)("[^"]*")([^"]*)/\2\n/g;s/"//g' /www/meteo/last.txt
grep -Eo '"[^"]*"' /www/meteo/last.txt | sed 's/"//g'
If you want a specific value, let's say the second temperature (still in quotes), you can use sed:
grep -Eo '"[^"]*"' /www/meteo/last.txt | sed -n '2p'

sed does not recognize -r flag on AIX

Thanks in advance for the help.
I have the following command line, which works on Linux.
myfile (extract)
active_instance_count=
aq_tm_processes=1
archive_lag_target=0
audit_file_dest=?/rdbms/audit
audit_sys_operations=FALSE
audit_trail=NONE
background_core_dump=partial
background_dump_dest=/home1/oracle/app/oracle/admin/iopecom/bdump
...
cat myfile |sed -r 's/ {1,}//g'|sed -r 's/\t*//g' |grep -v "^#"|sed -s "/^$/d" |sed =|sed 'N;s/\n/\t/'|sed -r "s/#.*//g" | sed "s/\t/;/g"|sed "s/\t/;/g"|sed -e "s,',\o042,g"
The result will be:
1;O7_DICTIONARY_ACCESSIBILITY=TRUE
2;active_instance_count=
3;aq_tm_processes=1
4;archive_lag_target=0
5;audit_file_dest=?/rdbms/audit
6;audit_sys_operations=FALSE
7;audit_trail=NONE
8;background_core_dump=partial
9;background_dump_dest=/home1/oracle/app/oracle/admin/iopecom/bdump
But I can't figure out how to run the same command on an AIX server.
Help is very welcome.
Regards.
Antonio.
Unless you have a compelling reason to use sed, you could use alternate tools:
awk -v OFS=';' '{print NR,$0}' filename
would produce the desired output.
You could also use perl:
perl -ne 'print "$.;$_"' filename
It appears that your sed expression would skip lines beginning with a #. As such, you could say:
perl -ne '$,=";"; !/^#/ && print ++$i,$_' filename
or something like:
grep -v '^#' filename | awk ...
reformatting your pipeline:
cat myfile |
sed -r 's/ {1,}//g' | # strip all spaces (1)
sed -r 's/\t*//g' | # strip all tabs (2)
grep -v "^#" | # delete all lines beginning `#` (3)
sed -s "/^$/d" | # delete all empty lines (4)
sed = | # interleave with line numbers (5)
sed 'N;s/\n/\t/' | # join line number and line with `\t` (6)
sed -r "s/#.*//g" | # strip all `#` comments (7)
sed "s/\t/;/g" | # replace all tabs with `;` (8)
sed "s/\t/;/g" | # do it again (9)
sed -e "s,',\o042,g" # replace all ' with " (10)
Boiling that down and using cat -n to provide the line numbers up front gets:
cat -n myfile |
sed "$(print 's/\t/;/')
$(print 's/[ \t]*//g')
s/#.*//g
/^$/d
s/'/\"/g"
which behaves identically unless I'm misreading the AIX docs. The $(...) construction is command substitution: it runs that command and substitutes its output. print would be printf on Linux.

AWK/SED. How to remove parentheses in simple text file

I have a text file looking like this:
(-9.1744438E-02,7.6282293E-02) (-9.1744438E-02,7.6282293E-02) ... and so on.
I would like to modify the file by removing all the parentheses and putting each pair on a new line,
so that it looks like this:
-9.1744438E-02,7.6282293E-02
-9.1744438E-02,7.6282293E-02
...
Is there a simple way to do that?
Any help is appreciated,
Fred
I would use tr for this job:
cat in_file | tr -d '()' > out_file
With the -d switch it just deletes any characters in the given set.
To add new lines you could pipe it through two trs:
cat in_file | tr -d '(' | tr ')' '\n' > out_file
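For comparison, the first form only strips the parentheses and leaves everything on one line, which is why the second pipeline is needed for the line breaks:
$ echo "(-9.1744438E-02,7.6282293E-02) (-9.1744438E-02,7.6282293E-02)" | tr -d '()'
-9.1744438E-02,7.6282293E-02 -9.1744438E-02,7.6282293E-02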
As was said, almost:
sed 's/[()]//g' inputfile > outputfile
or in awk:
awk '{gsub(/[()]/,""); print;}' inputfile > outputfile
This would work -
awk -v FS="[()]" '{for (i=2;i<=NF;i+=2) print $i }' inputfile > outputfile
Test:
[jaypal:~/Temp] cat file
(-9.1744438E-02,7.6282293E-02) (-9.1744438E-02,7.6282293E-02)
[jaypal:~/Temp] awk -v FS="[()]" '{for (i=2;i<=NF;i+=2) print $i }' file
-9.1744438E-02,7.6282293E-02
-9.1744438E-02,7.6282293E-02
This might work for you:
echo "(-9.1744438E-02,7.6282293E-02) (-9.1744438E-02,7.6282293E-02)" |
sed 's/) (/\n/g;s/[()]//g'
-9.1744438E-02,7.6282293E-02
-9.1744438E-02,7.6282293E-02
Guess we all know this, but just to emphasize:
Using a simpler, more specialized tool is generally faster than using awk or sed to do the same job. For instance, try not to use sed/awk where grep or tr can suffice.
In this particular case, I created a 100000-line file, each line containing "(" as well as ")" characters. Then I ran
$ /usr/bin/time -f%E -o log cat file | tr -d "()"
and again,
$ /usr/bin/time -f%E -ao log sed 's/[()]//g' file
And the results were:
05.44 sec : Using tr
05.57 sec : Using sed
cat in_file | sed 's/[()]//g' > out_file
Due to formatting issues, it is not entirely clear from your question whether you also need to insert newlines.