Insert to the middle of a line in a text file with sed

I am new to sed scripting. I have been researching how to add text to a line in a file.
The line I have in the text file looks like this:
hosts allow = 192.168.122. 172.24.0
I want to add an IP so the line looks like this:
hosts allow = 192.168.122. 192.12.0 172.24.0
Through trial and error I only have:
sed -i '/allow/ s/.*/&,192.12.0./' testfile
which gives:
hosts allow = 192.168.122. 172.24.0. 192.12.0.

Check this test:
$ cat file1
hosts allow = 192.168.122. 172.24.0
hosts allow = 111.111.111. 222.222.222 333.333.333
$ sed -E 's/(.* = .*)( .*)/\1 george \2/g' file1
hosts allow = 192.168.122. george 172.24.0
hosts allow = 111.111.111. 222.222.222 george 333.333.333
$ sed -r 's/(.* = .[^ ]*)(.*)/\1 george \2/g' file1
hosts allow = 192.168.122. george 172.24.0
hosts allow = 111.111.111. george 222.222.222 333.333.333
These tests use sed with extended regex support (the -E or -r switch) and capturing groups enclosed in ( ). In the second form, [^ ]* stops the first group at the first space after the =, so the new text is inserted after the first address rather than before the last one.

Your question is too vague to say for sure (do you want to add a string to the middle of a set, after the first one, before the last one, at a fixed location, or something else?), but this might be what you're looking for. The number at the end of the s command tells sed which occurrence of the pattern (a space followed by non-space characters) to act on, so it controls where the new address is appended:
$ sed 's/ [^[:space:]]*/& 192.12.0/2' file
hosts allow = 192.12.0 192.168.122. 172.24.0
$ sed 's/ [^[:space:]]*/& 192.12.0/3' file
hosts allow = 192.168.122. 192.12.0 172.24.0
$ sed 's/ [^[:space:]]*/& 192.12.0/4' file
hosts allow = 192.168.122. 172.24.0 192.12.0

Here's an awk solution, which I find more legible than sed solutions:
awk '$1$2$3 == "hostsallow=" { $NF = "192.12.0 " $NF } { print }' \
testfile > tmp && mv tmp testfile
This takes the first three columns and ensures they concatenate to "hostsallow=" without spaces (I doubt you'll run into a host sallow = … line). It then takes the last column (NF is the number of fields, and $NF is its value) and assigns to it the new value, a space, and the old value. All lines are then printed. Warning: this will collapse whitespace (in this case, I see that as a feature).
Unfortunately, awk doesn't have a -i option the way sed does, so this has to be saved in a temporary file and then put back on top of the original file.
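If you happen to have GNU awk 4.1 or newer, its -i inplace extension can skip the temporary-file step; a minimal sketch of the same edit, assuming gawk is installed:
gawk -i inplace '$1$2$3 == "hostsallow=" { $NF = "192.12.0 " $NF } { print }' testfile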

Related

sed issue replacing a negative value

I have a file where some entries look like:
EMIG_BAD_ID = syscall.Errno( -0x12f)
I want to use sed to replace that negative number to make it positive,
EMIG_BAD_ID = syscall.Errno( 0x12f)
I've tried some ideas from web searches but have not succeeded.
E.g. this one exits with an error:
egrep EMIG_* _error.grep | \
sed -e 's/syscall.Errno(\1)/syscall.Errno(-\1)/g' _error.grep
sed: -e expression #1, char 40: Invalid back reference
What is wrong here?
If the format is exactly as you posted, you can use this fairly simple replacement:
sed 's/syscall.Errno( -/syscall.Errno( /g' _error.grep
To make the space between ( and - optional:
sed 's/syscall.Errno( \?-/syscall.Errno( /g' _error.grep
To answer your question directly: if you insist on using back references (with the optional space), here's how. Your original command fails because it uses \1 in the pattern without first defining a capture group with \( \):
sed 's/syscall.Errno( \?-\(.*\))/syscall.Errno(\1)/g' _error.grep
Some additional notes:
You don't need grep - if EMIG_BAD_ID is on the same line it's very easy to include that in the sed matching pattern.
You pipe from egrep into sed but also give sed a file argument, so sed ignores the pipe and reads the file instead; that doesn't make sense. Prefer reading the file directly with sed, or, if you really need grep, let sed read from stdin by dropping the file argument. Specify -i with sed to perform an in-place edit.
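For example, a hedged sketch that folds the match into sed and edits the file in place (adjust the pattern to your exact format):
sed -i '/EMIG_BAD_ID/ s/Errno( *-/Errno( /' _error.grep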
Using sed
$ sed -E '/^EMIG/s/-([0-9]+)/\1/' input_file
EMIG_BAD_ID = syscall.Errno( 0x12f)
Thank you for your answers. Unfortunately I don't have a working answer yet.
Code below (part of a bigger file):
cat _error.out
E2BIG = 0x40000007
EMIG_ARRAY_TOO_LARGE = -0x133
cat test.sh
cat _error.out | sed 's/=\(.*\)/= syscall.Errno(\1)/'
Result:
E2BIG = syscall.Errno( 0x40000007)
EMIG_ARRAY_TOO_LARGE = syscall.Errno( -0x133)
The problem is to use bash/grep/sed to make the negative number positive.
Thanks!
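Building on the answers above, here is a hedged sketch that wraps the value and drops the sign in one pass with sed -E (GNU or BSD), assuming the NAME = value layout shown:
sed -E 's/= *-?(.*)/= syscall.Errno( \1)/' _error.out
Add -i once the output looks right.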

Simple method for finding and replacing a string in Linux

I'm currently trying to find a line in a file
#define IMAX 8000
and replace 8000 with another number.
Currently I'm stuck trying to pipe arguments from awk into sed:
grep '#define IMAX' 1d_Euler_mpi_test.c | awk '{print $3}' | sed
Not too sure how to proceed from here.
I would do something like:
sed -i '' '/^#define IMAX 8000$/s/8000/NEW_NUMBER/' 1d_Euler_mpi_test.c
Could you please try the following; place the new number's value in place of new_number (tested with GNU sed):
echo "#define IMAX 8000" | sed -E '/#define IMAX /s/[0-9]+$/new_number/'
If you are reading input from a file and want to save the output back into that same file, then use the following:
sed -E '/#define IMAX /s/[0-9]+$/new_number/' Input_file
Add the -i flag to the above code if you want to save the output into the file itself. Also, my commands catch any digits at the end of a line containing the string #define IMAX; if you only want to match 8000 (or another fixed number), change [0-9]+$ to 8000 etc. in the commands above.
You may use GNU sed.
sed -i -e 's/IMAX 8000/IMAX 9000/g' /tmp/file.txt
Which will invoke sed to do an in-place edit due to the -i option. This can be called from bash.
If you really really want to use just bash, then the following can work:
while IFS= read -r a ; do echo "${a//IMAX 8000/IMAX 9000}" ; done < /tmp/file.txt > /tmp/file.txt.t ; mv /tmp/file.txt{.t,}
This loops over each line, doing a substitution, and writing to a temporary file (you don't want to clobber the input); IFS= and read -r keep whitespace and backslashes intact. The move at the end just renames the temporary file to the original name.
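If the replacement number comes from the shell rather than being hard-coded, a hedged sketch with GNU sed (NEW_IMAX is a placeholder variable, not part of the original question):
NEW_IMAX=9000
sed -i "s/^#define IMAX [0-9]\{1,\}$/#define IMAX $NEW_IMAX/" 1d_Euler_mpi_test.c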

Remove all lines before a match with sed

I'm using sed to filter a list of files. I have a sorted list of folders and I want to get all lines after a specific one. To do this I'm using the solution described here, which works well with any input I've tried, but it doesn't work when the match is on the first line: in that case sed removes all lines of the input.
Here's an example:
$ ls -1 /
bin
boot
...
sys
tmp
usr
var
vmlinuz
$ ls -1 / | sed '1,/tmp/d'
usr
var
vmlinuz
$ ls -1 / | sed '1,/^bin$/d'
# sed will delete all lines from the input stream
How should I change the command so it also handles the edge case where the first line matches the regexp?
BTW, sed '1,1d' works correctly and removes only the first line.
try this (GNU sed only; the 0,/regexp/ address form is a GNU extension that lets the end address match on the very first line, unlike 1,/regexp/, which only starts testing the end address on line 2):
sed '0,/^bin$/d'
The output is:
$ sed '0,/^bin$/d' file
boot
...
sys
tmp
usr
var
vmlinuz
This sed command will print all lines after and including the matching line:
sed -n '/^WHATEVER$/,$p'
The -n switch makes sed print only when told (the p command).
If you don't want to include the matching line you can tell sed to delete from the start of the file to the matching line:
sed '1,/^WHATEVER$/d'
(We use the d command which deletes lines.)
You can also try this with awk:
awk '/searchname/{p=1;next}{if(p){print}}'
EDIT (considering the comment from Joe):
awk '/searchname/{p++;if(p==1){next}}p' Your_File
I would insert a tag before a match and delete in scope /start/,/####tag####/.
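A hedged sketch of one way to make that tag idea work with GNU sed: here the tag goes at the very top and the deleted range runs from the tag through the match, so a first-line match is handled too (####tag#### is an arbitrary marker that must not occur in the real data):
sed '1i ####tag####' file | sed '/^####tag####$/,/^bin$/d'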

In-place replacement

I have a CSV. I want to edit the 35th field of the CSV and write the change back to the 35th field. This is what I am doing on bash:
awk -F "," '{print $35}' test.csv | sed -i 's/^0/+91/g'
So I am pulling the 35th entry using awk and then replacing the "0" at the start of the string with "+91". This works perfectly and I get the desired output on the console.
Now I want this new entry to be written back to the file. I am thinking of sed's in-place replacement feature, but that feature needs an input file. In the above command I cannot provide an input file, because my primary command is awk and sed takes its input from awk.
Thanks.
You should choose one of the two tools. As for sed, it can be done as follows:
sed -ri 's/^(([^,]*,){34})0([^,]*)/\1+91\3/' test.csv
Not sure about awk, but @shellter's comment might help with that.
The in-place feature of sed is misnamed, as it does not edit the file in place. Instead, it creates a new file with the same name. For example:
$ echo foo > foo
$ ln -f foo bar
$ ls -i foo bar # These are the same file
797325 bar 797325 foo
$ echo new-text > foo # Changes bar
$ cat bar
new-text
$ printf '/new/s//newer\nw\nq\n' | ed foo # Edit foo "in-place"; changes bar
9
newer-text
11
$ cat bar
newer-text
$ ls -i foo bar # Still the same file
797325 bar 797325 foo
$ sed -i s/new/newer/ foo # Does not edit in-place; creates a new file
$ ls -i foo bar
797325 bar 792722 foo
Since sed is not actually editing the file in place, but writing a new file and then renaming it to the old file, you might as well do the same.
awk ... test.csv | sed ... > test.csv.1 && mv test.csv.1 test.csv
There is the misperception that using sed -i somehow avoids the creation of the temporary file. It does not; it just hides that fact from you. Sometimes abstraction is a good thing, but other times it is unnecessary obfuscation. In the case of sed -i, it is the latter. The shell is really good at file manipulation: use it as intended. If you do need to edit a file in place, don't use the streaming version of ed; just use ed.
So, it turned out there are numerous ways to do it. I got it working with sed as below:
sed -i 's/0\([0-9]\{10\}\)/\+91\1/g' test.csv
But this is a little tricky, as it will edit any entry that matches the pattern; in my case, however, it works fine.
Similar implementation of above logic in perl:
perl -p -i -e 's/\b0(\d{10})\b/\+91$1/g;' test.csv
Again, same caveat as mentioned above.
A more precise way of doing it, as shown by Lev Levitsky, operates specifically on the 35th field:
sed -ri 's/^(([^,]*,){34})0([^,]*)/\1+91\3/g' test.csv
For more complex situations, I would have to consider using one of Perl's CSV modules.
Thanks everyone for your time and input. I surely know more about sed/awk after reading your replies.
This might work for you:
sed -i 's/[^,]*/+91/35' test.csv
EDIT:
To replace the leading zero in the 35th field:
sed 'h;s/[^,]*/\n&/35;/\n0/!{x;b};s//+91/' test.csv
or more simply:
sed 's/^\(\([^,]*,\)\{34\}\)0/\1+91/' test.csv
If you have moreutils installed, you can simply use the sponge tool:
awk -F "," '{print $35}' test.csv | sed -i 's/^0/+91/g' | sponge test.csv
sponge soaks up the input, closes the input pipe (stdin) and, only then, opens and writes to the test.csv file.
As of 2015, moreutils is available in package repositories of several major Linux distributions, such as Arch Linux, Debian and Ubuntu.
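Note that the pipeline above writes only awk's 35th column back into test.csv. To keep the rest of each line, you can feed sponge from the field-aware substitution shown earlier; a hedged sketch, assuming a plain CSV with no quoted commas:
sed -r 's/^(([^,]*,){34})0/\1+91/' test.csv | sponge test.csv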
Another perl solution to edit the 35th field in-place:
perl -i -F, -lane '$F[34] =~ s/^0/+91/; print join ",",@F' test.csv
These command-line options are used:
-i edit the file in-place
-n loop around every line of the input file
-l removes newlines before processing, and adds them back in afterwards
-a autosplit mode – split input lines into the @F array. Defaults to splitting on whitespace.
-e execute the perl code
-F autosplit modifier, in this case splits on ,
@F is the array of words in each line, indexed starting with 0
$F[34] is the 35th element of the array
s/^0/+91/ does the substitution
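An awk version of the same field-level edit, as a hedged sketch (it assumes a plain CSV with no quoted commas, and writes through a temporary file since plain awk has no -i):
awk 'BEGIN { FS = OFS = "," } { sub(/^0/, "+91", $35) } 1' test.csv > tmp && mv tmp test.csv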

Filter text based on a multiline match criterion

I have the following sed command, and I need to execute it on a single line:
cat File | sed -n '
/NetworkName/ {
N
/\n.*ims3/ p
}' | sed -n 1p | awk -F"=" '{print $2}'
I need to execute the above command on a single line. Can anyone please help?
Assume that the contents of File are:
System.DomainName=shayam
System.Addresses=Fr6
System.Trusted=Yes
System.Infrastructure=No
System.NetworkName=AS
System.DomainName=ims5.com
System.DomainName=Ram
System.Addresses=Fr9
System.Trusted=Yes
System.Infrastructure=No
System.NetworkName=Peer
System.DomainName=ims7.com
System.DomainName=mani
System.Addresses=Hello
System.Trusted=Yes
System.Infrastructure=No
System.NetworkName=Peer
System.DomainName=ims3.com
After executing the command you will get only Peer as the output. Can anyone please help me out?
You can use a single nawk command, and you can lose the useless cat:
nawk -F"=" '/NetworkName/{n=$2;getline;if($2~/ims3/){print n} }' file
You can use sed as well, as proposed by others, but I prefer less regex and less clutter.
The above saves the value of the network name in "n". Then it gets the next line and checks the 2nd field against "ims3". If it matches, the value of "n" is printed.
Put that code in a separate .sh file, and run it as your single-line command.
cat File | sed -n '/NetworkName/ { N; /\n.*ims3/ p }' | sed -n 1p | awk -F"=" '{print $2}'
Assuming that you want the network name for the domain ims3, this command line works without sed:
grep -B 1 ims3 File | head -n 1 | awk -F"=" '{print $2}'
So, you want the network name where the domain name on the following line includes 'ims3', and not the one where the following line includes 'ims7' (even though the network names in the example are the same).
sed -n '/NetworkName/{N;/ims3/{s/.*NetworkName=\(.*\)\n.*/\1/p;};}' File
This avoids abuse of felines, too (not to mention reducing the number of commands executed).
Tested on MacOS X 10.6.4, but there's no reason to think it won't work elsewhere too.
However, empirical evidence shows that Solaris sed is different from MacOS sed. It can all be done in one sed command, but it needs three lines:
sed -n '/NetworkName/{N
/ims3/{s/.*NetworkName=\(.*\)\n.*/\1/p;}
}' File
Tested on Solaris 10.
You just need to put -e pretty much everywhere you'd break the command at a newline or have a semicolon. You don't need the extra call to sed or awk or cat.
sed -n -e '/NetworkName/ {' -e 'N' -e '/\n.*ims3/ s/[^\n]*=\(.*\)\n.*/\1/p' -e '}' File
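If you would rather drop sed entirely, here is a hedged awk sketch of the whole pipeline: it pairs each NetworkName line with the line that follows it and stops at the first ims3 match (replacing the sed -n 1p step):
awk -F= '/NetworkName/ { n = $2; next } /ims3/ && n { print n; exit } { n = "" }' File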