I hope you're having a great day.
I want to remove two patterns: the parts that contain the word images in a text that I have.
In the file test1 I have this:
APP:Server1:files APP:Server2:images APP:Server3:misc APP:Server4:xml APP:Server5:json APP:Server6:stats APP:Server7:graphs APP:Server8:images-v2
I need to remove APP:Server2:images and APP:Server8:images-v2 ... I want this output:
APP:Server1:files APP:Server3:misc APP:Server4:xml APP:Server5:json APP:Server6:stats APP:Server7:graphs
I'm trying this:
cat test1 | sed 's/ .*images.* / /g'
You need to make sure that your wildcards do not allow spaces, and also allow a match at the end of the line (the last field has no trailing space):
cat test1 | sed -E 's/ [^ ]*image[^ ]*( |$)/\1/g'
This should work for you (with the braces escaped, as basic regular expressions require):
sed 's/\w\{1,\}:Server[28]:\w\{1,\} //g'
\w matches word characters (letters, numbers, _)
\{1,\} matches one or more of the preceding item (\w)
[28] matches either the digit 2 or 8 (a bracket expression lists single characters; | is not alternation inside it)
Note that the - in images-v2 is not a word character, which is why the working pipeline below ends the match with .*$ instead of a space.
cat test.file
APP:Server1:files APP:Server2:images APP:Server3:misc APP:Server4:xml APP:Server5:json APP:Server6:stats APP:Server7:graphs APP:Server8:images-v2
The below command removes the matching lines and leaves blanks in their place
tr ' ' '\n' < test.file |sed 's/\w\{1,\}:Server[28]:\w\{1,\}.*$//'
APP:Server1:files
APP:Server3:misc
APP:Server4:xml
APP:Server5:json
APP:Server6:stats
APP:Server7:graphs
To remove the blank lines, just add a second command to the sed expression, and paste the contents back together:
tr ' ' '\n' < test.file |sed 's/\w\{1,\}:Server[28]:\w\{1,\}.*$//;/^$/d'|paste -sd ' ' -
APP:Server1:files APP:Server3:misc APP:Server4:xml APP:Server5:json APP:Server6:stats APP:Server7:graphs
GNU awk alternative:
awk 'BEGIN { RS="APP:" } $0=="" { next } { split($0,map,":"); if (map[2] ~ /images/) { next } printf "%s%s",RS,$0 }'
Set the record separator to "APP:" (multi-character RS is a GNU awk extension) and then process the text in between as separate records. If the record is blank, skip to the next record. Split the record into the array map using ":" as the delimiter, then check whether the text in the second index contains images. If it does, skip to the next record; otherwise print the record along with the record separator.
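For comparison, a per-field loop does the same job without changing the record separator. This is just a sketch, assuming the whole list sits on one space-separated line in test1:
awk '{
  out = ""
  for (i = 1; i <= NF; i++)           # walk every space-separated field
    if ($i !~ /images/)               # keep only fields that do not mention images
      out = out (out ? " " : "") $i
  print out
}' test1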
I am trying to consolidate an email list, but I want to uniq (or uniq -i -u) by the email address, not the entire line, so that we don't have duplicates.
list 1:
Company A <companya#companya.com>
Company B <companyb#companyb.com>
Company C <companyc#companyc.com>
list 2:
firstname lastname <firstname#gmail.com>
Fake Person <companyb#companyb.com>
Joe lastname <joe#gmail.com>
the current output is
Company A <companya#companya.com>
Company B <companyb#companyb.com>
Company C <companyc#companyc.com>
firstname lastname <firstname#gmail.com>
Fake Person <companyb#companyb.com>
Joe lastname <joe#gmail.com>
the desired output would be
Company A <companya#companya.com>
Company B <companyb#companyb.com>
Company C <companyc#companyc.com>
firstname lastname <firstname#gmail.com>
Joe lastname <joe#gmail.com>
(as companyb#companyb.com is listed in both)
How can I do that?
Given your file format:
$ awk -F'[<>]' '!a[$2]++' files
will print only the first instance for each address found between angle brackets. Or, if there is no content after the email, you don't need to unwrap the angle brackets:
$ awk '!a[$NF]++' files
The same can be done with sort as well:
$ sort -t'<' -k2,2 -u files
The side effect is that the output will be sorted, which may or may not be desired.
N.B. For both alternatives the assumption is that angle brackets don't appear anywhere other than around the email addresses.
Here is one in awk:
$ awk '
match($0,/[a-z0-9.]+#[a-z.]+/) { # look for emailish string *
a[substr($0,RSTART,RLENGTH)]=$0 # and hash the record using the address as key
}
END { # after all are processed
for(i in a) # output them in no particular order
print a[i]
}' file2 file1 # switch order to see how it affects output
Output
Company A <companya#companya.com>
Company B <companyb#companyb.com>
Company C <companyc#companyc.com>
Joe lastname <joe#gmail.com>
firstname lastname <firstname#gmail.com>
The script looks for a very simple emailish string (* see the regex in the script and tune it to your liking) which it uses as the key to hash the whole record; the last instance wins, as earlier ones are overwritten.
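If you would rather keep the first instance instead of the last, only store a record when its key is new. A variant sketch, with the same emailish regex and the same caveats:
awk '
match($0,/[a-z0-9.]+#[a-z.]+/) {      # same emailish matcher as above
  key=substr($0,RSTART,RLENGTH)
  if (!(key in a)) a[key]=$0          # first instance wins
}
END { for(i in a) print a[i] }' file1 file2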
uniq has an -f option to ignore a number of blank-delimited fields, so we can sort on the third field and then ignore the first two:
$ sort -k 3,3 infile | uniq -f 2
Company A <companya#companya.com>
Company B <companyb#companyb.com>
Company C <companyc#companyc.com>
firstname lastname <firstname#gmail.com>
Joe lastnanme <joe#gmail.com>
However, this isn't very robust: it breaks as soon as there aren't exactly two fields before the email address, since the sorting will be on the wrong field and uniq will compare the wrong fields.
Check karakfa's answer to see how uniq isn't even required here.
Alternatively, just checking for uniqueness of the last field:
awk '!e[$NF] {print; ++e[$NF]}' infile
or even shorter, stealing from karakfa, awk '!e[$NF]++' infile
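And if you also want the case-insensitive behaviour of uniq -i, lowercase the key first; a small variant of the same idea:
awk '!e[tolower($NF)]++' infile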
Could you please try the following.
awk '
{
match($0,/<.*>/)
val=substr($0,RSTART,RLENGTH)
}
FNR==NR{
a[val]=$0
print
next
}
!(val in a)
' list1 list2
Explanation: here is a detailed explanation of the above code.
awk ' ##Starting awk program here.
{ ##Starting BLOCK which will be executed for both of the Input_files.
match($0,/<.*>/) ##Using the match function of awk with a regex that matches everything from < to >.
val=substr($0,RSTART,RLENGTH) ##Creating variable named val whose value is substring of current line starting from RSTART to value of RLENGTH, basically matched string.
} ##Closing above BLOCK here.
FNR==NR{ ##Checking condition FNR==NR which will be TRUE when 1st Input_file named list1 will be read.
a[val]=$0 ##Creating an array named a whose index is val and value is current line.
print $0 ##Printing current line here.
next ##next will skip all further statements from here.
}
!(val in a) ##Checking whether val is present in array a; if it is NOT present, print the current line.
' list1 list2 ##Mentioning Input_file names here.
Output will be as follows.
Company A <companya#companya.com>
Company B <companyb#companyb.com>
Company C <companyc#companyc.com>
firstname lastname <firstname#gmail.com>
Joe lastname <joe#gmail.com>
Perhaps I don't understand the question, but you can try this awk:
awk 'NR!=FNR && $3 in a{next}{a[$3]}1' list1 list2
It records the third field of each list1 line as an array key and skips any list2 line whose third field has already been seen; note that this assumes the address is always the third whitespace-separated field.
I have a csv file exported from spreadsheet which has, in the last column, sometimes a list of names. The file comes out like this:
ag,bd,cj,dy,"ss"
aa,bs,cs,fg,"name1
name2
name3
"
ff,ce,sd,de,
ag,bd,jj,ds,"ds"
fs,ee,sd,ee,"name4
name5
"
and so on.
I would like to remove the line feed in the last column between quotes so that the output is:
ag,bd,cj,dy,ss
aa,bs,cs,fg,"name1 name2 name3"
ff,ce,sd,de,
ag,bd,jj,ds,"ds"
fs,ee,sd,ee,"name4 name5"
Thanks
This awk may be one solution for you: it counts the quotes on each line and joins lines while the count is odd, i.e. while a quoted field is still open:
awk '{n=gsub(/"/,"\""); if (n%2) s=!s; printf "%s"(s?FS:RS),$0}' file
ag,bd,cj,dy,"ss"
aa,bs,cs,fg,"name1 name2 name3 "
ff,ce,sd,de,
ag,bd,jj,ds,"ds"
fs,ee,sd,ee,"name4 name5 "
Note the stray space before the closing quotes; the solution below also cleans that up.
New solution (the input file goes to the first awk; lines outside a quoted field pass straight through):
awk -F\" 's==0 && NF%2; NF==2 {s++} s==1 {printf "%s ",$0} s==2 {print;s=0}' file | awk '{sub(/ "/,"\"")}1'
ag,bd,cj,dy,"ss"
aa,bs,cs,fg,"name1 name2 name3"
ff,ce,sd,de,
ag,bd,jj,ds,"ds"
fs,ee,sd,ee,"name4 name5"
I need to generate a file.sql file from a file.csv, so I use this command :
cat file.csv |sed "s/\(.*\),\(.*\)/insert into table(value1, value2)
values\('\1','\2'\);/g" > file.sql
It works perfectly, but when the group numbers exceed 9 (for example \10, \11, etc.) sed takes only the first digit into consideration (\1 in this case) and ignores the rest.
I want to know if I missed something or if there is another way to do it.
Thank you !
EDIT:
A non-working example:
My file.csv looks like
2013-04-01 04:00:52,2,37,74,40233964,3860,0,0,4878,174,3,0,0,3598,27.00,27
What I get
insert into table
val1,val2,val3,val4,val5,val6,val7,val8,val9,val10,val11,val12,val13,val14,val15,val16
values
('2013-04-01 07:39:43',
2,37,74,36526530,3877,0,0,6080,
2013-04-01 07:39:430,2013-04-01 07:39:431,
2013-04-01 07:39:432,2013-04-01 07:39:433,
2013-04-01 07:39:434,2013-04-01 07:39:435,
2013-04-01 07:39:436);
After the ninth element I get the first value with a literal digit appended instead of the 10th, 11th, etc.
As far as I know, sed supports only 9 backreferences, \1 through \9; there is no \10, and sed parses it as \1 followed by a literal 0, which is exactly what the output above shows. You are better off using perl or awk for this.
Here is how you'd do it in awk:
$ cat csv
2013-04-01 04:00:52,2,37,74,40233964,3860,0,0,4878,174,3,0,0,3598,27.00,27
$ awk 'BEGIN{FS=OFS=","}{print "insert into table values (\x27"$1"\x27",$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16 ");"}' csv
insert into table values ('2013-04-01 04:00:52',2,37,74,40233964,3860,0,0,4878,174,3,0,0,3598,27.00,27);
This is how you can do it in perl:
$ perl -ple 's/([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+)/insert into table values (\x27$1\x27,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16);/' csv
insert into table values ('2013-04-01 04:00:52',2,37,74,40233964,3860,0,0,4878,174,3,0,0,3598,27.00,27);
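If you still want to stay with sed, note that only the first field needs quoting in your sample, so two backreferences suffice. A sketch, assuming the timestamp is the only field that needs quotes and never contains a comma:
$ sed -E "s/^([^,]*),(.*)$/insert into table values ('\1',\2);/" csv
insert into table values ('2013-04-01 04:00:52',2,37,74,40233964,3860,0,0,4878,174,3,0,0,3598,27.00,27);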
Try an awk script (based on @JS웃's solution):
script.awk
#!/usr/bin/awk -f
# before looping the file
BEGIN{
FS="," # input separator
OFS=FS # output separator
q="\047" # single quote as a variable
}
# on each line (no pattern)
{
printf "insert into table values ("
print q $1 q ", "
print $2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16
print ");"
}
Run with
awk -f script.awk file.csv
One-liner
awk 'BEGIN{OFS=FS=","; q="\047" } { printf "insert into table values (" q $1 q ", " $2","$3","$4","$5","$6","$7","$8","$9","$10","$11","$12","$13","$14","$15","$16 ");" }' file.csv
I have a file in stanza format. An example of the file is below.
id_1:
id=241
pgrp=staff
groups=staff
home=/home/id_1
shell=/usr/bin/ks
id_2:
id=242
pgrp=staff
groups=staff
home=/home/id_2
shell=/usr/bin/ks
How do I use sed or awk to process it and return only the id name, id, and groups on a single line in tab-delimited format? e.g.:
id_1 241 staff
id_2 242 staff
with awk:
BEGIN { FS="=" }
$1 ~ /id_/ { sub(/:$/,"",$1); printf("%s", $1) }
$1 ~ /id/ && $1 !~ /_/ { printf("\t%s", $2) }
$1 ~ /groups/ { printf("\t%s\n", $2) }
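Save it as, say, stanza.awk (the name is just an example) and run it with:
awk -f stanza.awk data.txt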
Here is an awk solution:
translate.awk
#!/usr/bin/awk -f
{
if(match($1, /[^=]:[ ]*$/)){
id_=$1
sub(/:/,"",id_)
}
if(match($1,/id=/)){
split($1,p,"=")
id=p[2]
}
if(match($1,/groups=/)){
split($1,p,"=")
print id_ "\t" id "\t" p[2]
}
}
Execute it either by:
chmod +x translate.awk
./translate.awk data.txt
or
awk -f translate.awk data.txt
For completeness, here comes a shortened version:
#!/usr/bin/awk -f
$1 ~ /[^=]:[ ]*$/ {sub(/:/,"",$1); printf "%s\t",$1; FS="="; next}
$1 == "id" {printf "%s\t",$2}
$1 == "groups" {print $2}
sed 'N;N;N;N;N;s/://;y/=\n/ /' data.txt | awk -v OFS='\t' '{print $1,$3,$7}'
Here is the one-liner approach by setting RS:
awk 'NR>1{print "id_"++i,$3,$7}' RS='id_[0-9]+:' FS='[=\n]' OFS='\t' file
id_1 241 staff
id_2 242 staff
Requires GNU awk and assumes the IDs are in increasing order starting at 1.
If the ordering of the IDs is arbitrary:
awk '!/shell/&&NR>1{gsub(/:/,"",$1);print "id_"$1,$3,$5}' RS='id_' FS='[=\n]' OFS='\t' file
id_1 241 staff
id_2 242 staff
awk -F"=" '/id_/{split($0,a,":");}/id=/{i=$2}/groups/{printf a[1]"\t"i"\t"$2"\n"}' your_file
tested below:
> cat temp
id_1:
id=241
pgrp=staff
groups=staff
home=/home/id_1
shell=/usr/bin/ks
id_2:
id=242
pgrp=staff
groups=staff
home=/home/id_2
shell=/usr/bin/ks
> awk -F"=" '/id_/{split($0,a,":");}/id=/{i=$2}/groups/{printf a[1]"\t"i"\t"$2"\n"}' temp
id_1 241 staff
id_2 242 staff
This might work for you (GNU sed):
sed -rn '/^[^ :]+:/{N;N;N;s/:.*id=(\S+).*groups=(\S+).*/\t\1\t\2/p}' file
Look for a line holding an id, then fetch the next three lines and rearrange the output.