My file content:
abc
def
I want to add a line (123) after abc, with spaces before it.
My final file content should be:
abc
123
def
I am using the command below, but it is not working for me. Please help.
sudo sed -i "/abc/a\\\123" file.txt
Note: there are no blank lines between the lines; I just want to put some spaces before the new line (i.e. before the line 123).
You can use this sed:
sed -i '/abc/a\ 123' file
Ex:
$ sed '/abc/a\ 123' file
abc
123
def
Following is an awk solution, if you are open to awk.
Sample input:
cat infile
abc
def
Explanation:
Check for the pattern abc; if found, update the current line to the current line followed by a newline and then 123. The 1 invokes awk's default action of printing.
Note: the newline is printed using awk's built-in variable ORS, which defaults to a newline.
awk '/abc/ {$0=$0 ORS " 123" }1' infile
abc
 123
def
To make changes in the original file:
awk '/abc/ {$0=$0 ORS " 123" }1' infile >infile.tmp && mv infile.tmp infile
Just use awk for clarity, portability, etc.:
awk '{print} /abc/{print " 123"}' file
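With the sample input (abc then def), this prints:
abc
 123
def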
On a Unix system, I am trying to add a new line in a file using sed or perl, but it seems I am missing something.
Suppose my file has multiple lines of text, always ending like this: {TNG:}}${1:F01.
I am trying to find a way to add a new line after the }$, so that {1 always starts on a new line.
I tried escaping the $ sign using this:
perl -e '$/ = "\${"; while (<>) { s/\$}\{$/}\n{/; print; }' but it does not work.
Any ideas will be appreciated.
Give this a try:
sed 's/{TNG:}}\$/&\n/' file > newfile
sed uses BRE (basic regular expressions) by default, so the {}s are literal characters, but we must escape the $.
kent$ cat f
{TNG:}}${1:F01.
kent$ sed 's/{TNG:}}\$/&\n/' f
{TNG:}}$
{1:F01.
With perl:
$ cat input.txt
line 1 {TNG:}}${1:F01
line 2 {TNG:}}${1:F01
$ perl -pe 's/TNG:\}\}\$\K/\n/' input.txt
line 1 {TNG:}}$
{1:F01
line 2 {TNG:}}$
{1:F01
(Read up on the -p and -n options in perlrun and use them instead of trying to do what they do in a one-liner yourself)
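For illustration, -p just wraps your code in a read-and-print loop; the following two commands (using a placeholder substitution s/foo/bar/) behave essentially the same, edge cases aside:
perl -pe 's/foo/bar/' file
perl -e 'while (<>) { s/foo/bar/; print }' file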
I have a TSV with fields that look like:
name location 1,2,3,4,5
When I use sed 's/\w/,/g'
I end up with a CSV where 1,2,3,4 and 5 are considered separate entries.
I would like it to be '1 2 3 4 5'
I've tried converting the commas to whitespace before running the above command, using
sed 's/,/\w/g'
However, when converting the whitespace back to commas, it matches single spaces as well as the tabs, so what is the regex for just a single whitespace character?
Desired output:
name, location,1 2 3 4 5,
As mentioned in a comment, CSV usually deals with occurrences of its separator character in values by enclosing the value in quotes, so I suggest you simply deal with this by enclosing every value in quotes:
sed -E 's/([^\t]*)(\t|$)/"\1",/g'
This leaves a trailing comma, as in your sample output; if you want to avoid it you can use the following:
sed -E 's/\t+$//;s/^/"/;s/\t/","/g;s/$/"/'
If your original data contains ", you will however need to escape those, which you can achieve by adding the following substitution before the other(s):
s/"/\\"/g
As Ed Morton suggests, we can also strip the trailing empty fields:
s/\t+$//
In conclusion I'd use the following:
sed -E 's/"/\\"/g;s/\t+$//;s/^/"/;s/\t/","/g;s/$/"/'
Either replace tabs with "," and enclose lines between double quotes, or replace commas with spaces and tabs with commas. In both cases you'll get valid CSV.
$ cat file
name location 1,2,3,4,5
$
$ sed 's/\t/","/g; s/^\|$/"/g' file
"name","location","1,2,3,4,5"
$
$ sed 's/,/ /g; s/\t/,/g' file
name,location,1 2 3 4 5
And in awk:
$ awk -v OFS="," '{for(i=1;i<=NF;i++)if($i~/,/)$i="\"" $i "\"";$1=$1}1' file
name,location,"1,2,3,4,5"
Explained:
$ awk -v OFS="," '{ # set output delimiter (OFS) to a comma *
for(i=1;i<=NF;i++) # loop all fields
if($i~/,/) # if comma in field
$i="\"" $i "\"" # surround with quotes **
$1=$1 # rebuild record
}1' file # output
* if there are spaces within the fields, set the input field separator to a tab with awk -F"\t".
** also, if there are quotes in the fields with commas, maybe they should be duplicated or escaped.
Depending on your real requirements:
$ awk -F'\t' -v OFS=',' '{for (i=1;i<=NF;i++) $i="\""$i"\""} 1' file
"name","location","1,2,3,4,5"
$ awk -F'\t' -v OFS=',' '{for (i=1;i<=NF;i++) gsub(OFS," ",$i); $1=$1} 1' file
name,location,1 2 3 4 5
$ awk -F'\t' -v OFS=',' '{for (i=1;i<=NF;i++) gsub(OFS," ",$i); $(NF+1)=""} 1' file
name,location,1 2 3 4 5,
$ echo 'a"b' | awk -F'\t' -v OFS=',' '{for (i=1;i<=NF;i++) { gsub(/"/,"\"\"",$i); $i="\""$i"\"" } } 1'
"a""b"
sed 's/\t/","/g; s/^\|$/"/g' file
doesn't work on macOS. For macOS, instead use:
sed 's/\t/","/g;s/^/"/;s/$/"/' file
I have an issue with sed; I need to accomplish two things with a CSV file:
in front of each line that does not start with UNES I need to add the tag "BF2;"
at the start of the file (after UNES; if present) I need to add the tag "UNH;"
Example (no UNES;)
50000024;IE15;041111;113901;verstuurd;Aangift;
50000024;IE15;041111;113901;verstuurd;Aangifte;
50000024;IE15;041111;113901;verstuurd;Aangifte;
Example (with UNES;)
UNES;
50000024;IE15;041111;113901;verstuurd;Aangift;
50000024;IE15;041111;113901;verstuurd;Aangifte;
50000024;IE15;041111;113901;verstuurd;Aangifte;
so far I have this:
sed -e 's/^\([^"UNES"]\)/BF2;\1/' | sed '/UNES/ a\UNH;'
This works as long as a UNES; tag is present - I can't seem to figure out how to insert the UNH; when UNES is not present!
Any help much appreciated
Sample output:
UNES;
UNH;
BF2;50000024;IE15;041111;113901;verstuurd;Aangifte;
BF2;50000024;IE15;041111;113901;verstuurd;Aangifte;
BF2;50000024;IE15;041111;113901;verstuurd;Aangifte;
Here's how you could do it using awk:
awk 'NR==1 {if(f=/^UNES;/)print; print "UNH;"} !f{print "BF2;" $0} {f=0}' file
On the first line, if /^UNES;/ matches, print it and set the flag f. Always print "UNH;". If the f flag is set, skip the next action; that action prepends "BF2;" to the line. f is reset to 0 after the first line, so all further lines have "BF2;" added to the start.
Testing it out:
$ cat file
UNES;
50000024;IE15;041111;113901;verstuurd;Aangift;
50000024;IE15;041111;113901;verstuurd;Aangifte;
50000024;IE15;041111;113901;verstuurd;Aangifte;
$ awk 'NR==1 {if(f=/^UNES;/)print; print "UNH;"} !f{print "BF2;" $0} {f=0}' file
UNES;
UNH;
BF2;50000024;IE15;041111;113901;verstuurd;Aangift;
BF2;50000024;IE15;041111;113901;verstuurd;Aangifte;
BF2;50000024;IE15;041111;113901;verstuurd;Aangifte;
$ cat file2
50000024;IE15;041111;113901;verstuurd;Aangift;
50000024;IE15;041111;113901;verstuurd;Aangifte;
50000024;IE15;041111;113901;verstuurd;Aangifte;
$ awk 'NR==1 {if(f=/^UNES;/)print; print "UNH;"} !f{print "BF2;" $0} {f=0}' file2
UNH;
BF2;50000024;IE15;041111;113901;verstuurd;Aangift;
BF2;50000024;IE15;041111;113901;verstuurd;Aangifte;
BF2;50000024;IE15;041111;113901;verstuurd;Aangifte;
You can use this sed command:
sed '/^UNES;$/{i\
UNH;
n};s/^/BF2;/;' file.txt
Details:
i\ with UNH; inserts the line UNH; when UNES; is the whole line (/^UNES;$/).
n replaces the pattern space with the next line.
Try this, it works for me:
sed '/^UNES;$/{i\
UNH;
n};s/^[0-9]*/BF2;&/;'
I have 'file1' with (say) 100 lines. I want to use sed or awk to print lines 23, 71 and 84 (for example) to 'file2'. Those 3 line numbers are in a separate file, 'list', with each number on a separate line.
When I use either of these commands, only line 84 gets printed:
for i in $(cat list); do sed -n "${i}p" file1 > file2; done
for i in $(cat list); do awk 'NR==x {print}' x=$i file1 > file2; done
Can a for loop be used in this way to supply line addresses to sed or awk?
This might work for you (GNU sed):
sed 's/.*/&p/' list | sed -nf - file1 >file2
Use list to build a sed script.
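With the example list (23, 71 and 84), the first sed turns each number N into the sed command Np:
23p
71p
84p
The second sed reads that script from standard input (-f -) and, because of -n, prints only those lines of file1 to file2.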
You need to do the > after the loop in order to capture everything. Since you are using it inside the loop, the file gets overwritten on each iteration; inside the loop you would need >>.
Good practice is to use > outside the loop so the file is not reopened for writing on every iteration.
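For example, the first loop from the question with the redirection moved outside the loop:
for i in $(cat list); do sed -n "${i}p" file1; done > file2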
However, you can do everything in awk without a for loop:
awk 'NR==FNR{a[$1]++;next}FNR in a' list file1 > file2
Here NR==FNR is true only while reading list, so its line numbers become keys of the array a; for file1, FNR in a then selects exactly those line numbers.
You have to use >> (append to the file), but you are overwriting the file. That is why you always get only line 84 in file2.
Try:
for i in $(cat list); do sed -n "${i}p" file1 >> file2; done
With sed:
sed -n $(sed -e 's/^/-e /' -e 's/$/p/' list) input
Given the example input, the inner command creates a string like this:
-e 23p
-e 71p
-e 84p
so the outer sed then prints the given lines.
You can avoid running sed/awk in a for/while loop altogether:
# store all line numbers in a variable, joined with |
lines=$(echo $(<list) | sed 's/ /|/g')
# print lines of specified line numbers and store output
awk -v lineS="^($lines)$" 'NR ~ lineS' file1 > out
I have a file with three columns. I would like to delete the 3rd column(in-place editing). How can I do this with awk or sed?
123 abc 22.3
453 abg 56.7
1236 hjg 2.3
Desired output
123 abc
453 abg
1236 hjg
Try this short one:
awk '!($3="")' file
With GNU awk for inplace editing, \s/\S, and gensub() to delete
1) the FIRST field:
awk -i inplace '{sub(/^\S+\s*/,"")}1' file
or
awk -i inplace '{$0=gensub(/^\S+\s*/,"",1)}1' file
2) the LAST field:
awk -i inplace '{sub(/\s*\S+$/,"")}1' file
or
awk -i inplace '{$0=gensub(/\s*\S+$/,"",1)}1' file
3) the Nth field where N=3:
awk -i inplace '{$0=gensub(/\s*\S+/,"",3)}1' file
Without GNU awk you need a match()+substr() combo or multiple sub()s + vars to remove a middle field. See also Print all but the first three columns.
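For instance, a minimal sketch of that approach in plain (POSIX) awk, deleting the 3rd field with match() and substr() (the head/tail variable names are just illustrative, not from the original answer):
awk '{
  # match optional leading blanks, the 1st field, blanks, and the 2nd field
  if (match($0, /^[ \t]*[^ \t]+[ \t]+[^ \t]+/)) {
    head = substr($0, 1, RLENGTH)
    tail = substr($0, RLENGTH + 1)
    sub(/^[ \t]+[^ \t]+/, "", tail)   # drop the separator and the 3rd field
    $0 = head tail
  }
  print
}' file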
This might work for you (GNU sed):
sed -i -r 's/\S+//3' file
If you want to delete the white space before the 3rd field:
sed -i -r 's/(\s+)?\S+//3' file
It seems you could simply go with
awk '{print $1 " " $2}' file
This prints the two first fields of each line in your input file, separated with a space.
Try using cut... it's fast and easy.
First, you have repeated spaces; you can squeeze those down to a single space between columns, if that's what you want, with tr -s ' '.
If each column already has just one delimiter before the next, you can use cut -d ' ' -f-2 to print fields (columns) <= 2.
For example, if your data is in a file input.txt, you can do one of the following:
cat input.txt | tr -s ' ' | cut -d ' ' -f-2
Or, if you find it easier to reason about this problem as removing the 3rd column, you can write the following:
cat input.txt | tr -s ' ' | cut -d ' ' --complement -f3
cut is pretty powerful; you can also extract ranges of bytes or characters, in addition to fields.
Excerpt from the man page on the syntax for specifying the list range:
Each LIST is made up of one range, or many ranges separated by commas.
Selected input is written in the same order that it is read, and is
written exactly once. Each range is one of:
N N'th byte, character or field, counted from 1
N- from N'th byte, character or field, to end of line
N-M from N'th to M'th (included) byte, character or field
-M from first to M'th (included) byte, character or field
So you could also have said you want specific columns 1 and 2 with:
cat input.txt | tr -s ' ' | cut -d ' ' -f1,2
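And, as a small illustration of the byte/character ranges mentioned above (this example is mine, not from the original answer), the following keeps only the first four characters of each line:
cut -c 1-4 input.txt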
Try this:
awk '$3="";1' file.txt > new_file && mv new_file file.txt
or
awk '{$3="";print}' file.txt > new_file && mv new_file file.txt
Try
awk '{$3=""; print $0}'
If you're open to a Perl solution...
perl -ane 'print "$F[0] $F[1]\n"' file
These command-line options are used:
-n loop around every line of the input file, do not automatically print every line
-a autosplit mode – split input lines into the @F array. Defaults to splitting on whitespace
-e execute the following perl code