I have a file that contains two lines:
Line1: "My name is ABCD"
Line2: "My name is XYZ"
I want to copy every line containing the string "My name", paste the copy on the next line, and change the copy by appending new text. E.g. the new file should look like:
Line1: "My name is ABCD"
Line2: "My name is ABCD and age 2"
Line3: "My name is XYZ"
Line4: "My name is XYZ and age 2"
Try this:
sed 's/My name.*/&\n& and age 2/' file
Explanation:
The pattern My name.* matches lines containing My name followed by any characters (.*).
In the substitution, & stands for the whole matched string: the match, then a newline \n, then the match again, then and age 2.
To edit the file in place, add the -i flag:
sed -i 's/My name.*/&\n& and age 2/' file
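A quick check on the sample data (GNU sed assumed; it understands \n in the replacement, while BSD sed would need a literal newline):

```shell
# Build a two-line sample file and run the substitution (GNU sed)
printf '%s\n' 'My name is ABCD' 'My name is XYZ' > file
sed 's/My name.*/&\n& and age 2/' file
# My name is ABCD
# My name is ABCD and age 2
# My name is XYZ
# My name is XYZ and age 2
```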
Could you please try the following and let me know if it helps you.
awk '{Q=$0;sub(/\".$/," and age 2\".",Q);print $0 ORS Q}' Input_file
#!/usr/bin/awk -f
1                           # print the original line
{ print $0, "and age 2" }   # then print it again with the suffix appended
Or:
#!/bin/sh
awk '1; {print $0, "and age 2"}' file
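The semicolon matters in the one-liner: `1;` is a complete rule that prints the original line, and the block is a second rule that prints the modified copy. Without the semicolon, 1 becomes the pattern of the block and only the modified line is printed. A quick check:

```shell
# Two awk rules: "1;" prints the line as-is, the block prints the modified copy
printf '%s\n' 'My name is ABCD' 'My name is XYZ' |
awk '1; {print $0, "and age 2"}'
# My name is ABCD
# My name is ABCD and age 2
# My name is XYZ
# My name is XYZ and age 2
```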
I have file with this text:
mirrors:
  docker.io:
    endpoint:
      - "http://registry:5000"
  registry:5000:
    endpoint:
      - "http://registry:5000"
  localhost:
    endpoint:
      - "http://registry:5000"
I need to replace it with this text in POSIX shell script (not bash):
mirrors:
  docker.io:
    endpoint:
      - "http://docker.io"
  registry:5000:
    endpoint:
      - "http://registry:5000"
  localhost:
    endpoint:
      - "http://localhost"
The replacement should be done dynamically, in every place, without hard-coded names. I mean we should take the sub-string from the first line of each block ("docker.io", "registry:5000", "localhost") and replace with it the sub-string "registry:5000" in the third line.
I've figured out a regex that splits each block into 5 groups: (^ )([^ ]*)(:[^"]*"http:\/\/)([^"]*)(")
Then I tried to use sed to print group 2 in place of group 4, but this didn't work: sed -n 's/\(^ \)\([^ ]*\)\(:[^"]*"http:\/\/\)\([^"]*\)\("\)/\1\2\3\2\5/p'
Please help!
This might work for you (GNU sed):
sed -E '1N;N;/\n.*endpoint:.*\n/s#((\S+):.*"http://)[^"]*#\1\2#;P;D' file
Open up a three-line window into the file.
If the second line contains endpoint:, replace the last piece of text following http:// with the first piece of text before :.
Print/delete the first line of the window and then replenish the three-line window by appending the next line.
Repeat until the end of the file.
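As a sanity check (GNU sed assumed: \S and . matching across the embedded newlines are GNU behaviour), with the config indented as in the original file:

```shell
# Sample config (the two-space indentation is an assumption about the original file)
cat > config.yml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "http://registry:5000"
  registry:5000:
    endpoint:
      - "http://registry:5000"
  localhost:
    endpoint:
      - "http://registry:5000"
EOF
# Each endpoint is rewritten from the section name above it
sed -E '1N;N;/\n.*endpoint:.*\n/s#((\S+):.*"http://)[^"]*#\1\2#;P;D' config.yml
```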
Awk would be a better candidate for this. Pass the replacement string in as the variable str and the section to change as findstr (" docker.io", " localhost", or " registry:5000"), like so:
awk -v findstr=" docker.io" -v str="http://docker.io" '
$0 ~ findstr { dockfound=1 # We have found the section passed in findstr and so we set the dockfound marker
}
/endpoint/ && dockfound==1 { # We encounter endpoint after the dockfound marker is set and so we set the found marker
found=1;
print;
next
}
found==1 && dockfound==1 { # We know from the found and the dockfound markers being set that we need to process this line
match($0,/^[[:space:]]+-[[:space:]]"/); # Match the start of the line to the beginning quote
$0=substr($0,RSTART,RLENGTH)str"\""; # Rebuild the line: the matched prefix, the replacement string (str), then the closing quote
found=0; # Reset the markers
dockfound=0
}1' file
One liner:
awk -v findstr=" docker.io" -v str="http://docker.io" '$0 ~ findstr { dockfound=1 } /endpoint/ && dockfound==1 { found=1;print;next } found==1 && dockfound==1 { match($0,/^[[:space:]]+-[[:space:]]"/);$0=substr($0,RSTART,RLENGTH)str"\"";found=0;dockfound=0 }1' file
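Since the section and replacement are passed in as variables, the whole file can be rewritten without hard-coded names by looping over the section list. A sketch (the file name config.yml and the loop are my additions; registry:5000 is skipped because its endpoint is already correct):

```shell
cat > config.yml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "http://registry:5000"
  registry:5000:
    endpoint:
      - "http://registry:5000"
  localhost:
    endpoint:
      - "http://registry:5000"
EOF
# Rewrite each section's endpoint in turn, feeding each pass's output back in
for name in docker.io localhost; do
  awk -v findstr=" $name" -v str="http://$name" '
    $0 ~ findstr { dockfound=1 }
    /endpoint/ && dockfound==1 { found=1; print; next }
    found==1 && dockfound==1 {
      match($0, /^[[:space:]]+-[[:space:]]"/)
      $0 = substr($0, RSTART, RLENGTH) str "\""
      found=0; dockfound=0
    }1' config.yml > config.tmp && mv config.tmp config.yml
done
cat config.yml
```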
I have an input file (customers.txt) that looks like this:
Name, Age, Email,
Hank, 22, hank#mail.com
Nathan, 32, nathan#mail.com
Gloria, 24, gloria#mail.com
I'm trying to output to a file (customersnew.txt) that looks like this:
Name: Hank Age: 22 Email: hank#mail.com
Name: Nathan Age: 32 Email: nathan#mail.com
Name: Gloria Age: 24 Email: gloria#mail.com
So far, I've only been able to get an output like:
Name: Hank, 22, hank#mail.com
Name: Nathan, 32, nathan#mail.com
Name: Gloria, 24, gloria#mail.com
The code that I'm using is:
sed -e '1d' \
    -e 's/.*/Name: &/g' customers.txt > customersnew.txt
I've also tried separating the data using -e 's/,/\n/g' and then -e '2s/.*Age: &/g', but it doesn't work. Any help would be highly appreciated.
Have you considered using awk for this? Like:
$ awk 'BEGIN {FS=", ";OFS="\t"} NR==1 {split($0,hdr);next} {for(i=1;i<=NF;i++)$i=hdr[i]": "$i} 1' file
Name: Hank Age: 22 Email: hank#mail.com
Name: Nathan Age: 32 Email: nathan#mail.com
Name: Gloria Age: 24 Email: gloria#mail.com
This simply saves the headers into an array and prefixes each field in the following records with "<header>: ".
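Worth noting: the output above assumes a clean header, but the header line in the question ends with a stray comma, so split would save Email, as the last header and the label would come out as Email,:. Trimming the trailing comma first avoids that; a sketch:

```shell
printf '%s\n' 'Name, Age, Email,' 'Hank, 22, hank#mail.com' \
  'Nathan, 32, nathan#mail.com' 'Gloria, 24, gloria#mail.com' > customers.txt
awk 'BEGIN{FS=", "; OFS="\t"}
     NR==1{sub(/,[[:space:]]*$/, ""); split($0, hdr); next}  # drop stray comma, save headers
     {for(i=1;i<=NF;i++) $i = hdr[i] ": " $i}                # label every field
     1' customers.txt > customersnew.txt
cat customersnew.txt
```

The fields come out tab-separated, e.g. Name: Hank, Age: 22, Email: hank#mail.com.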
This might work for you (GNU sed & column):
sed -E '1h;1d;G;s/^/,/;:a;s/,\s*(.*\n)([^,]+),/\2: \1/;ta;P;d' file | column -t
Copy the header to the hold space and delete it from the output.
Append the header to each detail line.
Prepend a comma to the start of the line.
Create a substitution loop that replaces the first comma by the first heading in the appended header.
When all the commas have been replaced, print the first line and delete the rest.
To display in neat columns use the column command with the -t option.
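Piping through column rewrites the spacing, so to see what the sed stage itself produces, it can be run alone. Note the stray trailing comma on the question's header line is what lets the substitution loop consume the final field:

```shell
printf '%s\n' 'Name, Age, Email,' 'Hank, 22, hank#mail.com' 'Nathan, 32, nathan#mail.com' |
sed -E '1h;1d;G;s/^/,/;:a;s/,\s*(.*\n)([^,]+),/\2: \1/;ta;P;d'
# Name: Hank Age: 22 Email: hank#mail.com
# Name: Nathan Age: 32 Email: nathan#mail.com
```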
Could you please try the following.
awk '
BEGIN{
FS=", "
OFS="\t"
}
FNR==1{
for(i=1;i<=NF;i++){
value[i]=$i
}
next
}
{
for(i=1;i<=NF;i++){
$i=value[i] ": " $i
}
}
1
' Input_file
Explanation: a line-by-line explanation of the above solution.
awk ' ##Starting awk program from here.
BEGIN{ ##Starting BEGIN section of this program from here.
FS=", " ##Setting field separator as comma space here.
OFS="\t" ##Setting output field separator as TAB here for all lines.
}
FNR==1{ ##Checking here if this is first line then do following.
for(i=1;i<=NF;i++){ ##Starting a for loop to traverse through all elements of fields here.
value[i]=$i ##Creating an array named value with index variable i and value is current field value.
}
next ##next will skip all further statements from here.
}
{
for(i=1;i<=NF;i++){ ##Traversing through all fields of current line here.
$i=value[i] ": " $i ##Resetting the current field to the array value (index i), a colon and space, then the current field value.
}
}
1 ##1 will print all lines here.
' Input_file ##Mentioning Input_file name here.
I'm new to Unix tools like sed, perl, etc. I've searched and found nothing matching my case. I need to append a substring taken from a line above to a later line in the same file. My file content (text.txt):
Name: sur.name.custom
Tel: xxx
Address: yyy
Website: www.site.com/id=
Name: sur.name.custom1
Tel: xxx
Address: yyy
Website: www.site.com/id=
I need to append every Name value (sur.name.*) to the Website line in its block.
So the expected output:
Name: sur.name.custom
Tel: xxx
Address: yyy
Website: www.site.com/id=sur.name.custom
Name: sur.name.custom1
Tel: xxx
Address: yyy
Website: www.site.com/id=sur.name.custom1
I've tried the following sed command:
sed -n "/^Website:.*id=$/ s/$/sur.name..*/p" ./text.txt;
But sed returned Website: www.site.com/id=sur.name.*, i.e. the same literal string I put in.
I'm sure sed can append text captured by a regex pattern. I'd like both sed and perl solutions if possible.
Why don't you use awk for this? Assuming the names don't contain spaces, the following command should work:
awk '$1=="Name:"{name=$2} $1=="Website:"{print $0 name;next} 1' file
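Checking the awk version against one block of the sample input:

```shell
printf '%s\n' 'Name: sur.name.custom' 'Tel: xxx' 'Address: yyy' 'Website: www.site.com/id=' |
awk '$1=="Name:"{name=$2} $1=="Website:"{print $0 name;next} 1'
# Name: sur.name.custom
# Tel: xxx
# Address: yyy
# Website: www.site.com/id=sur.name.custom
```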
Perl equivalent:
perl -pale'
$F[0] eq "Name:" and $name = $F[1];
$F[0] eq "Website:" and $_ .= $name;
' file
(Line breaks may be removed.)
Here's a sed solution:
sed '/^Name:/{h;s/Name: *//;x;};/^Website:/{G;s/\n//;}' filename
Translation: if a line begins with Name:, save the name in the hold space; if a line begins with Website:, append the (latest) name from the hold space.
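Running the sed solution over both blocks of the sample input:

```shell
printf '%s\n' 'Name: sur.name.custom' 'Tel: xxx' 'Address: yyy' 'Website: www.site.com/id=' \
  'Name: sur.name.custom1' 'Tel: xxx' 'Address: yyy' 'Website: www.site.com/id=' |
sed '/^Name:/{h;s/Name: *//;x;};/^Website:/{G;s/\n//;}'
# Name: sur.name.custom
# Tel: xxx
# Address: yyy
# Website: www.site.com/id=sur.name.custom
# Name: sur.name.custom1
# Tel: xxx
# Address: yyy
# Website: www.site.com/id=sur.name.custom1
```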
I am trying to replace a block of text between two patterns with blank lines.
I tried using the command below:
sed '/PATTERN-1/,/PATTERN-2/d' input.pl
But it deletes the lines outright (pattern lines included) instead of replacing the in-between lines with blanks.
PATTERN-1 : "=head"
PATTERN-2 : "=cut"
input.pl contains below text
=head
hello
hello world
world
morning
gud
=cut
Required output :
=head
=cut
Can anyone help me on this?
$ awk '/=cut/{f=0} {print (f ? "" : $0)} /=head/{f=1}' file
=head
=cut
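The five replaced lines do come through as empty lines; they are just hard to see in the output above. Counting the output lines makes it visible:

```shell
printf '%s\n' '=head' 'hello' 'hello world' 'world' 'morning' 'gud' '=cut' |
awk '/=cut/{f=0} {print (f ? "" : $0)} /=head/{f=1}' |
wc -l
# 7   (=head, five empty lines, =cut)
```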
To modify the given sed command, try
$ sed '/=head/,/=cut/{//! s/.*//}' ip.txt
=head
=cut
//! applies the substitution only to lines other than the range delimiters; the empty regex // reuses the most recent regular expression. Whether // dynamically matches both delimiters or only the last one used may depend on the sed implementation; it works with GNU sed.
s/.*// clears those lines.
awk '/=cut/{found=0}found{print "";next}/=head/{found=1}1' infile
# OR
# ^ to take care of line starts with regexp
awk '/^=cut/{found=0}found{print "";next}/^=head/{found=1}1' infile
Explanation:
awk '/=cut/{ # if line contains regexp
found=0 # set variable found = 0
}
found{ # if variable found is nonzero value
print ""; # print ""
next # go to next line
}
/=head/{ # if line contains regexp
found=1 # set variable found = 1
}1 # 1 at the end does default operation
# print current line/row/record
' infile
Test Results:
$ cat infile
=head
hello
hello world
world
morning
gud
=cut
$ awk '/=cut/{found=0}found{print "";next}/=head/{found=1}1' infile
=head
=cut
This might work for you (GNU sed):
sed '/=head/,/=cut/{//!z}' file
Zap the lines between =head and =cut.
I want to delete everything in a file up to and including the first blank line, keeping everything after it. Can anyone help me figure out how to do this? It would be much appreciated.
Example:
block of            //delete
non-important text  //delete

important text      //keep
more important text //keep
sed '1,/^$/d' file
or
awk '!$0{f=1;next}f{print}' file
Output
$ sed '1,/^$/d' <<< $'block of\nnon-important text\n\nimportant text\nmore important text'
important text
more important text
$ awk '!$0{f=1;next}f{print}' <<< $'block of\nnon-important text\n\nimportant text\nmore important text'
important text
more important text
If the separator line is truly empty (no spaces or tabs), this'll do it:
sed '1,/^$/d' filename
with awk:
awk 'BEGIN { x = 0 } /^$/ { x = 1 } { if (x == 2) { print }; if (x == 1) { x = 2 } }' filename
Another option with grep (working on lines)
grep -v PATTERN filename > newfilename
For example:
filename has the following lines:
this is not implicated but important text
this is not important text
this is important text he says
not important text he says
not this it is more important text
A filter of:
grep -v "not important" filename > newfilename
would create newfilename with the following 3 lines:
this is not implicated but important text
this is important text he says
not this it is more important text
You would have to choose a PATTERN that uniquely identifies the lines you are trying to remove. A PATTERN of "important text" would match every line, and "not imp" would also match "not implicated" in the first line; "not important" matches only the two lines to be removed. Use egrep (or grep -E) for regexp filters if you want more flexibility in pattern matching.
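A self-contained run of the example, filtering with the pattern "not important" (which avoids also matching "not implicated" in the first line):

```shell
printf '%s\n' \
  'this is not implicated but important text' \
  'this is not important text' \
  'this is important text he says' \
  'not important text he says' \
  'not this it is more important text' > filename
grep -v "not important" filename > newfilename
cat newfilename
# this is not implicated but important text
# this is important text he says
# not this it is more important text
```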