I need to change a string in a file. The string repeats on every row, but I need to change it only for a couple of rows (not for all of them).
Let's say, the /etc/fstab file:
/dev/mapper/rhel-root / xfs defaults 0 0
/dev/mapper/rhel-boot /boot ext4 defaults 1 2
/dev/mapper/rhel-home /home ext3 defaults 1 2
/dev/mapper/rhel-var /var ext3 defaults 1 2
/dev/mapper/workvg-swaplv swap swap defaults 0 0
How can I change the "defaults" string to "string" only for /home and /var entries?
The things I've tried involve a lot of combinations of grep|sed|xargs etc., but none of them work as expected :)
echo $(egrep "/var | /home" fstab | awk '{ print $4}') | xargs -I '{}' sed 's/{}/string/' fstab
Can someone give me some ideas regarding this?
Thank you in advance, guys!
$ sed -E '/\/(home|var)/ s/defaults/string/' fstab
/dev/mapper/rhel-root / xfs defaults 0 0
/dev/mapper/rhel-boot /boot ext4 defaults 1 2
/dev/mapper/rhel-home /home ext3 string 1 2
/dev/mapper/rhel-var /var ext3 string 1 2
/dev/mapper/workvg-swaplv swap swap defaults 0 0
-E use extended regex
/\/(home|var)/ this is to match only those lines with /home or /var
s/defaults/string/ substitute required pattern with replacement string
Edit:
For more robust matching:
sed -E '/^\S+\s+\/(home|var)\b/ s/^((\S+\s+){3})defaults\b/\1string/' fstab
This checks for an exact match of /home or /var in the 2nd column and replaces an exact match of defaults in the 4th column.
\b matches a word boundary, \s matches whitespace, \S matches anything that is not whitespace.
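For reference, running the stricter version on the same sample should produce the same result (\s and \S are GNU sed extensions):
$ sed -E '/^\S+\s+\/(home|var)\b/ s/^((\S+\s+){3})defaults\b/\1string/' fstab
/dev/mapper/rhel-root / xfs defaults 0 0
/dev/mapper/rhel-boot /boot ext4 defaults 1 2
/dev/mapper/rhel-home /home ext3 string 1 2
/dev/mapper/rhel-var /var ext3 string 1 2
/dev/mapper/workvg-swaplv swap swap defaults 0 0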
This will work robustly for all input lines and for all values of defaults and string that do not contain backslashes (an easy tweak if you do have those):
$ cat tst.awk
BEGIN { split(entries,t); for (i in t) tgts["/"t[i]] }
($2 in tgts) && match($0,/^((\S+\s+){3})(\S+)(\s.*)$/,a) && (a[3]==old) { $0 = a[1] new a[4] }
{ print }
$ awk -v entries="home var" -v old="defaults" -v new="string" -f tst.awk file
/dev/mapper/rhel-root / xfs defaults 0 0
/dev/mapper/rhel-boot /boot ext4 defaults 1 2
/dev/mapper/rhel-home /home ext3 string 1 2
/dev/mapper/rhel-var /var ext3 string 1 2
/dev/mapper/workvg-swaplv swap swap defaults 0 0
It uses GNU awk for the 3rd arg to match(), other awks would use substr().
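If you don't have GNU awk, a minimal portable sketch is to let awk's own field splitting do the work; the caveat is that assigning to $4 rebuilds the record with single spaces, so any alignment on changed lines is normalized:
$ awk -v entries="home var" -v old="defaults" -v new="string" '
BEGIN { split(entries, t); for (i in t) tgts["/" t[i]] }
($2 in tgts) && ($4 == old) { $4 = new }
{ print }
' file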
$ awk '/\/var/ || /\/home/ {sub(/defaults/, "string")} 1' /etc/fstab
/\/var/ || /\/home/ if either keyword occurs in the record
sub(/defaults/, "string") replace the first "defaults" with "string"
1 print changed and unchanged records
awk '/home|var/{sub(/defaults/,"string")}1' file
/dev/mapper/rhel-root / xfs defaults 0 0
/dev/mapper/rhel-boot /boot ext4 defaults 1 2
/dev/mapper/rhel-home /home ext3 string 1 2
/dev/mapper/rhel-var /var ext3 string 1 2
/dev/mapper/workvg-swaplv swap swap defaults 0 0
How can I remove useless ".0" strings in a txt file of numbers?
If I have this file:
43.65 24.0 889.5 5.0
32.14 32.0 900.0 6.0
38.27 43.0 899.4 5.0
I want to get:
43.65 24 889.5 5
32.14 32 900 6
38.27 43 899.4 5
I tried: sed 's|\.0 | |g' but that obviously does not work for values at the end of a line or at EOF.
Any suggestion without getting into python, etc?
This might work for you (GNU sed):
sed 's/\.0\b//g' file
Or
sed 's/\.0\>//g' file
Remove any period followed by a zero followed by a word boundary.
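For example, with the sample input (the word boundaries \b and \> are GNU sed extensions):
$ sed 's/\.0\b//g' file
43.65 24 889.5 5
32.14 32 900 6
38.27 43 899.4 5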
You can use
sed -E 's,([0-9])\.0($| ),\1\2,g' file
Details:
-E - enables POSIX ERE syntax
([0-9])\.0($| ) - finds and captures into Group 1 a digit, then matches .0, and then matches and captures into Group 2 a space or end of string
\1\2 - replaces with Group 1 + Group 2 concatenated values
g - replaces all occurrences.
See the online demo:
s='43.65 24.0 889.5 5.0
32.14 32.0 900.0 6.0
38.27 43.0 899.4 5.0'
sed -E 's,([0-9])\.0($| ),\1\2,g' <<< "$s"
Output:
43.65 24 889.5 5
32.14 32 900 6
38.27 43 899.4 5
I want to extract the value attached to op/s from this line:
pgmap 512 pgs: 512 active+clean; 1.39TiB data, 4.08TiB used, 1.15TiB / 5.24TiB avail; 4.43MiB/s wr, 46op/s
pginfo="pgmap 512 pgs: 512 active+clean; 1.39TiB data, 4.08TiB used, 1.15TiB / 5.24TiB avail; 4.43MiB/s wr, 46op/s"
echo $pginfo | sed -n '/pgmap/s/.* \([0-9]*\) op\/s.*/\1/p'
But it does not return anything. Any help/pointers will be appreciated.
Your regex is looking for a space after the digits, but there isn't one.
Also, you should quote the echoed variable to prevent it from being mangled by the shell.
echo "$pginfo" | sed -n '/pgmap/s/.* \([0-9][0-9]*\)op\/s.*/\1/p'
I have two files of the type:
File1.txt
1 117458 rs184574713 rs184574713
1 119773 rs224578963 rs224500000
1 120000 rs224578874 rs224500045
1 120056 rs60094200 rs60094200
2 120056 rs60094536 rs60094536
File2.txt
10 120200 120400 A 0 189,183,107
1 0 119600 C 0 233,150,122
1 119600 119800 D 0 205,92,92
1 119800 120400 F 0 192,192,192
2 120400 122000 B 0 128,128,128
2 126800 133200 A 0 192,192,192
I want to add the information contained in the second file to the first file. The first column in both files needs to match, while the second column in File1.txt should fall in the interval indicated by columns 2 and 3 in File2.txt, so that the output looks like this:
1 117458 rs184574713 rs184574713 C 0 233,150,122
1 119773 rs224578963 rs224500000 D 0 205,92,92
1 120000 rs224578874 rs224500045 F 0 192,192,192
1 120056 rs60094200 rs60094200 F 0 192,192,192
2 120440 rs60094536 rs60094536 B 0 128,128,128
Please help me with awk/perl.. or any other script.
This is how you would do it in bash (with a little help from awk):
join -1 1 -2 1 \
<(sort -k1,1 test1) <(sort -k1,1 test2) | \
awk '$2 >= $5 && $2 <= $6 {print $1, $2, $3, $4, $7, $8, $9}'
Here is a brief explanation.
First, we use join to join lines based on the common key (the first field).
But join expects both input files to be already sorted on that key, in lexicographic order (hence sort -k1,1; a numeric sort would confuse join once keys like 10 appear).
Finally, we employ awk to apply the required condition, and to project the fields we want.
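With the sample files, this should print (the last expected line drops out because 120056 does not fall in either key-2 interval):
1 117458 rs184574713 rs184574713 C 0 233,150,122
1 119773 rs224578963 rs224500000 D 0 205,92,92
1 120000 rs224578874 rs224500045 F 0 192,192,192
1 120056 rs60094200 rs60094200 F 0 192,192,192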
Try this (considering the fact that there is a typo in your output for the last entry: 120056 is not between 120400 and 122000):
$ awk '
NR==FNR {
a[$1,$2,$3]=$4 FS $5 FS $6;
next
}
{
for(x in a) {
split(x,tmp,SUBSEP);
if($1==tmp[1] && $2>=tmp[2] && $2<=tmp[3])
print $0 FS a[x]
}
}' file2 file1
1 117458 rs184574713 rs184574713 C 0 233,150,122
1 119773 rs224578963 rs224500000 D 0 205,92,92
1 120000 rs224578874 rs224500045 F 0 192,192,192
1 120056 rs60094200 rs60094200 F 0 192,192,192
You read through the first input file (file2), creating an array indexed by columns 1, 2 and 3, holding the values of columns 4, 5 and 6.
For the second file (file1), you look up in your array: for every key, you split the key and check the condition that the first column matches and the second column is in range.
If the condition is true, you print the entire line from file1 followed by the value from the array. Note that this scans the whole array for every line, so it is O(N×M); that is fine for small files like these.
I've been trying to pull a field from each row of a file, but a row may have two or three fields more or less than the others; the number of fields per row isn't constant.
Here is a snippet:
A orarpp 45286124 1 1 0 20 60 Nov 25 9-16:42:32 01:04:58 11176 117056 0 - oracleXXX (LOCAL=NO)
A orarpp 45351560 1 1 3 20 61 Nov 30 5-03:54:42 02:24:48 4804 110684 0 - ora_w002_XXX
A orarpp 45548236 1 1 22 20 71 Nov 26 8-19:36:28 00:56:18 10628 116508 0 - oracleXXX (LOCAL=NO)
A orarpp 45679190 1 1 0 20 60 Nov 28 6-23:42:20 00:37:59 10232 116112 0 - oracleXXX (LOCAL=NO)
A orarpp 45744808 1 1 0 20 60 10:52:19 23:08:12 00:04:58 11740 117620 0 - oracleXXX (LOCAL=NO)
A root 45810380 1 1 0 -- 39 Nov 25 9-19:54:34 00:00:00 448 448 0 - garbage
In the case of the first line, I'm interested in 9-16:42:32 and the similar field on each row.
I've tried to pull it by using ':' as the field separator and then filtering from there; however, what I am trying to accomplish is to do something if the number before the dash (9 in the example) is greater than one.
cat file.txt | grep oracle | awk -F: '{print substr($1, length($1)-5)}'
The problem is that the number of fields on either side of the field I actually need can differ from line to line.
It's definitely not the most efficient approach, but I've been trying to do this with an awk one-liner.
Hints or a direction to get me moving again would be appreciated. I am not opposed to doing it in a better way than awk.
Thanks.
Maybe cut is the right tool for this job? For example, with your snippet:
$ cut -c 62-71 file.txt
9-16:42:32
5-03:54:42
8-19:36:28
6-23:42:20
23:08:12
9-19:54:34
The arguments tell cut to snip columns (-c) 62 through 71.
For additional processing, you can pipe it to awk.
You can also accomplish the whole thing in awk by accepting entire lines and then using substr to extract the columns you want. For example, this awk command produces the same output as the cut command above:
awk '{ print substr($0, 62, 10) }' file.txt
Whether you create a pipeline or do the processing entirely in awk is at least in part a matter of personal taste / style.
Would this do?
awk -F: '/oracle/ {print substr($0,62,10)}' file.txt
9-16:42:32
8-19:36:28
6-23:42:20
23:08:12
This searches for oracle and then prints 10 characters starting at position 62.
You can grab those identifiers with one of
grep -o '[[:digit:]]\+-[[:digit:]]\{2\}:[[:digit:]]\{2\}:[[:digit:]]\{2\}'
grep -oP '\d+-\d\d:\d\d:\d\d' # GNU grep
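With your snippet, the GNU form should print the identifiers it finds (note that the 23:08:12 line has no day count and therefore simply doesn't match):
$ grep -oP '\d+-\d\d:\d\d:\d\d' file.txt
9-16:42:32
5-03:54:42
8-19:36:28
6-23:42:20
9-19:54:34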
It sounds like you want to do something with the lines, not just find the ids. Please elaborate.
Using GNU awk:
gawk --re-interval '
/oracle/ && \
match($0, /([[:digit:]]+)-([[:digit:]]{2}:){2}[[:digit:]]{2}/, a) && \
a[1]>1 {
# do something with the matching line
print
}
' file
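If your awk lacks --re-interval and the third match() argument, a portable sketch is to scan fields instead of fixed character positions, which also survives the varying field counts you mentioned:
awk '/oracle/ {
    for (i = 1; i <= NF; i++)
        if ($i ~ /^[0-9]+-[0-9][0-9]:[0-9][0-9]:[0-9][0-9]$/) {
            split($i, p, "-")        # p[1] is the day count
            if (p[1] + 0 > 1) print  # only processes running more than a day
        }
}' file.txt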
Q1: Make sed match the whole line, and delete the line only if it consists entirely of the string
I have a file that contains several of the following numbers:
1 1
3 1
12 1
1 12
25 24
23 24
I want to delete the lines where the two numbers are the same. For that I have been using either:
sed '/1 1/d' < old.file > new.file
OR
sed -n '/1 1/!p' < old.file > new.file
Here is the main problem: if I search for the pattern '1 1', I get rid of '1 12' as well. So I want the pattern to match the whole line and, only if it does, delete it.
Q2: Automation of question 1
I am also trying to automate this problem. The range of numbers in the first column and the second column could be from 1 to 25.
So far this is what I got:
for ((i=1;i<26;i++)); do
sed "/'$i' '$i'/d" < oldfile > newfile; mv newfile oldfile;
done
This does nothing to the oldfile in the end. :(
This would be more readable with awk:
awk '$1 == $2 {next} {print}' oldfile > newfile
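With the sample input, this should leave:
3 1
12 1
1 12
25 24
23 24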
Update based on comment:
If the requirement is to remove lines where the two values are within 1 of each other:
awk '{d = $1-$2; if (-1 <= d && d <= 1) next; else print}' oldfile
Unfortunately, awk does not have abs() (at least nawk and gawk don't)
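You can define one yourself, though; a minimal sketch for the same requirement:
awk 'function abs(x) { return x < 0 ? -x : x } abs($1 - $2) > 1' oldfile
The pattern-only rule prints a line whenever the absolute difference exceeds 1, which drops the lines whose values are within 1 of each other.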
Just put the first number in a group (\([0-9]*\)) and then look for it with a backreference (\1). Since the line to delete should contain only the group, repeated, use the ^ to mark the beginning of line and the $ to mark the end of line. For example, for the following file:
$ cat input
1 1
3 1
12 1
1 12
12 12
12 13
13 13
25 24
23 24
...the result is:
$ sed '/^\([0-9]*\) \1$/d' input
3 1
12 1
1 12
12 13
25 24
23 24
You can also do it with grep:
grep -E -v "([0-9])*\s\1" testfile
Look for multiple digits in a row and remember them, followed by a single whitespace, followed by whatever digits you remembered.
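With the sample input above, the corrected command should produce the same output as the sed solution:
$ grep -E -v '^([0-9]+)[[:space:]]\1$' input
3 1
12 1
1 12
12 13
25 24
23 24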