I have a file with multiple spaces and I am replacing the spaces with a single space using:
system "sed -i -e 's/[[:space:]]\\+/ /g' /home/donovan/Documents/NWPMIK.txt";
How can I now go and remove any spaces after the third space?
You can use perl's auto-splitting feature for this:
perl -lane 'push @F, join("", splice(@F,3)); print join " ", @F'
Example:
% echo 'abc def ghi jkl mno pqr' | perl -lane 'push @F, join("", splice(@F,3)); print join " ", @F'
abc def ghi jklmnopqr
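If you want to apply it to your file directly rather than through a pipe, combining it with -i should work for in-place editing (a variation of the command above, untested here, so keep a backup):
perl -i -lane 'push @F, join("", splice(@F,3)); print join " ", @F' /home/donovan/Documents/NWPMIK.txt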
This Perl one-liner will remove any space after the 3rd space. What it actually does is replace every sequence of three or more whitespace characters with a single space and write the result to a new file:
perl -pe 's/\s{3,}/ /g' /home/donovan/Documents/NWPMIK.txt > /home/donovan/Documents/NWPMIK_new.txt
If you are looking to update the file in-place, then :
perl -pi -e 's/\s{3,}/ /g' /home/donovan/Documents/NWPMIK.txt
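If you would rather keep a copy of the original, -i also accepts a backup extension, for example:
perl -pi.bak -e 's/\s{3,}/ /g' /home/donovan/Documents/NWPMIK.txt
This saves the unmodified file as NWPMIK.txt.bak before rewriting it.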
Working with the example log file below:
1;000117;20190529;055529;9521;0988388019
1;000015;20190529;071944;2222;2231
1;000012;20190529;072734;4258;4252
1;000006;20190529;073336;2226;1000
3;000005;20190529;073715;1000;037760967
3;000004;20190529;073751;1000;037760967
I need to normalize the last column, padding it with spaces until it has length = 25.
I tried, unsuccessfully, with this Perl code:
perl -F';' -lane '$F[5] = $F[5], sprintf "% 25d"; $" = ";"; print "@F"'
I need the output below:
1;000117;20190529;055529;9521;0988388019
1;000015;20190529;071944;2222;2231
1;000012;20190529;072734;4258;4252
1;000006;20190529;073336;2226;1000
3;000005;20190529;073715;1000;037760967
3;000004;20190529;073751;1000;037760967
$ awk 'BEGIN{FS=OFS=";"} {$NF=sprintf("%-25s",$NF)}1' file
1;000117;20190529;055529;9521;0988388019
1;000015;20190529;071944;2222;2231
1;000012;20190529;072734;4258;4252
1;000006;20190529;073336;2226;1000
3;000005;20190529;073715;1000;037760967
3;000004;20190529;073751;1000;037760967
So you can see the blanks:
$ awk 'BEGIN{FS=OFS=";"} {$NF=sprintf("%-25s",$NF)}1' file | tr ' ' '#'
1;000117;20190529;055529;9521;0988388019###############
1;000015;20190529;071944;2222;2231#####################
1;000012;20190529;072734;4258;4252#####################
1;000006;20190529;073336;2226;1000#####################
3;000005;20190529;073715;1000;037760967################
3;000004;20190529;073751;1000;037760967################
You were on the right track. Here are two working Perl one-liners:
perl -F';' -lane '$F[5]=sprintf("%-25s",$F[5]);print join ";",@F'
perl -F';' -lpane '$F[5]=sprintf("%-25s",$F[5]);$_=join ";",@F'
This might work for you (GNU sed):
sed -i ':a;/;[^;]\{25\}$/!s/$/ /;ta' file
If the last field is not 25 characters long, add a space until it is.
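To verify the result, you can print the length of the last field; every line should now report 25 (a quick check, not part of the original answer):
awk -F';' '{print length($NF)}' file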
I have a problem with replacing a string.
|Stm=2|Seq=2|Num=2|Svc=101|MsgSize(514)=514|MsgType=556|SymbolIndex=16631
I want to find the occurrence of Svc up to the next | and swap its place with Stm up to the next |.
My attempts ended up replacing characters, which is not my goal.
awk -F'|' -v OFS='|' '{
  a=b=0
  for(i=1;i<=NF;i++){a=$i~/^Stm=/?i:a;b=$i~/^Svc=/?i:b}
  t=$a;$a=$b;$b=t
}7' file
outputs:
|Svc=101|Seq=2|Num=2|Stm=2|MsgSize(514)=514|MsgType=556|SymbolIndex=16631
The code exchanges the Stm.. and Svc.. columns, no matter which one comes first.
If a Perl solution is okay, this assumes only one column matches each of the search terms:
$ cat ip.txt
|Stm=2|Seq=2|Num=2|Svc=101|MsgSize(514)=514|MsgType=556|SymbolIndex=16631
$ perl -F'\|' -lane '
@i = grep { $F[$_] =~ /Svc|Stm/ } 0..$#F;
$t=$F[$i[0]]; $F[$i[0]]=$F[$i[1]]; $F[$i[1]]=$t;
print join "|", #F;
' ip.txt
|Svc=101|Seq=2|Num=2|Stm=2|MsgSize(514)=514|MsgType=556|SymbolIndex=16631
-F'\|' -lane split input line on |, see also Perl flags -pe, -pi, -p, -w, -d, -i, -t?
@i = grep { $F[$_] =~ /Svc|Stm/ } 0..$#F get the indices of the columns matching Svc and Stm
$t=$F[$i[0]]; $F[$i[0]]=$F[$i[1]]; $F[$i[1]]=$t swap the two columns
Or use ($F[$i[0]], $F[$i[1]]) = ($F[$i[1]], $F[$i[0]]); courtesy How can I swap two Perl variables
print join "|", #F print the modified array
You need to use capture groups and backreferences in a string substitution.
The command below will swap the two:
echo '|Stm=2|Seq=2|Num=2|Svc=101|MsgSize(514)=514|MsgType=556|SymbolIndex=16631' | sed 's/\(Stm.*|\)\(.*\)\(Svc.*|\)/\3\2\1/'
As pointed out in the comment from @Kent, this will not work if the strings are not in that order.
I have a file that has around 500 rows and 480K columns, and I need to move columns 2, 3 and 4 to the end. My file is comma separated; is there a quick way to rearrange this using awk or sed?
You can try the solution below:
perl -F"," -lane 'print "#F[0]"," ","#F[4..$#F]"," ","#F[1..3]"' input.file
You can copy the columns easily; actually moving them would take too long for 480K columns.
$ awk 'BEGIN{FS=OFS=","} {print $0,$2,$3,$4}' input.file > output.file
what kind of a data format is this?
Another technique, just bash:
while IFS=, read -r a b c d e; do
echo "$a,$e,$b,$c,$d"
done < file
Testing with 5 fields:
$ cat foo
1,2,3,4,5
a,b,c,d,e
$ cat program.awk
{
    $6=$2 OFS $3 OFS $4 OFS $1     # copy fields 2-4 to the end, and $1 too
    sub(/^([^,],){4}/,"")          # remove the first 4 columns
    $1=$5 OFS $1                   # prepend current $5 (was $1) to $1
    NF=4                           # reduce NF
} 1                                # print
Run it:
$ awk -f program.awk FS=, OFS=, foo
1,5,2,3,4
a,e,b,c,d
So theoretically this should work:
{
    $480001=$2 OFS $3 OFS $4 OFS $1
    sub(/^([^,],){4}/,"")
    $1=$480000 OFS $1
    NF=479999
} 1
EDIT: It did work.
Perhaps perl:
perl -F, -lane 'print join(",", #F[0,4..$#F,1,2,3])' file
or
perl -F, -lane '@x = splice @F, 1, 3; print join(",", @F, @x)' file
Another approach: regular expressions
perl -lpe 's/^([^,]+)(,[^,]+,[^,]+,[^,]+)(.*)/$1$3$2/' file
Timing it with a 500 line file, each line containing 480,000 fields
$ time perl -F, -lane 'print join(",", @F[0,4..$#F,1,2,3])' file.csv > file2.csv
40.13user 1.11system 0:43.92elapsed 93%CPU (0avgtext+0avgdata 67960maxresident)k
0inputs+3172752outputs (0major+16088minor)pagefaults 0swaps
$ time perl -F, -lane '@x = splice @F, 1, 3; print join(",", @F, @x)' file.csv > file2.csv
34.82user 1.18system 0:38.47elapsed 93%CPU (0avgtext+0avgdata 52900maxresident)k
0inputs+3172752outputs (0major+12301minor)pagefaults 0swaps
And pure text manipulation is the winner
$ time perl -lpe 's/^([^,]+)(,[^,]+,[^,]+,[^,]+)(.*)/$1$3$2/' file.csv > file2.csv
4.63user 1.36system 0:20.81elapsed 28%CPU (0avgtext+0avgdata 20612maxresident)k
0inputs+3172752outputs (0major+149866minor)pagefaults 0swaps
I have a special file with this kind of format:
title1
_1 texthere
title2
_2 texthere
I would like every line starting with "_" to be appended as a second column to the line before it.
I tried to do that using sed with this command:
sed 's/_\n/ /g' filename
but it is not giving me what I want (it basically does nothing).
Can anyone point me to the right way of doing it?
Thanks
Try the following solution:
In sed, the loop is done by creating a label (:a); while not on the last line ($!), append the next line (N) and return to label a:
:a
$! {
N
b a
}
After this we have the whole file in memory, so we do a global substitution for each _ preceded by a newline:
s/\n_/ _/g
p
Putting it all together:
sed -ne ':a ; $! { N ; ba }; s/\n_/ _/g ; p' infile
That yields:
title1 _1 texthere
title2 _2 texthere
If your whole file is like your sample (pairs of lines), then the simplest answer is
paste - - < file
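Note that paste joins the two lines with a tab by default; if you want a single space as in the sample output, set the delimiter explicitly:
paste -d' ' - - < file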
Otherwise
awk '
NR > 1 && /^_/ {printf "%s", OFS}
NR > 1 && !/^_/ {print ""}
{printf "%s", $0}
END {print ""}
' file
This might work for you (GNU sed):
sed ':a;N;s/\n_/ /;ta;P;D' file
This avoids slurping the file into memory.
or, for seds that do not accept ; after labels and branch commands:
sed -e ':a' -e 'N' -e 's/\n_/ /' -e 'ta' -e 'P' -e 'D' file
A Perl approach:
perl -00pe 's/\n_/ /g' file
Here, the -00 causes perl to read the input in paragraph mode, where records are separated by blank lines rather than single newlines. In your example there are no blank lines, so it will read the entire file into memory, and a simple global substitution of \n_ with a space will work.
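As a quick illustration of paragraph mode (not from the original answer), a two-paragraph input is read as two records:
$ printf 'a\nb\n\nc\n' | perl -00 -ne 'print "record ", ++$n, "\n"'
record 1
record 2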
That is not very efficient for very large files though. If your data is too large to fit in memory, use this:
perl -ne 'chomp;
s/^_// ? print "$l " : print "$l\n" if $. > 1;
$l=$_;
END{print "$l\n"}' file
Here, the file is read line by line (-n) and the trailing newline removed from all lines (chomp). At the end of each iteration, the current line is saved as $l ($l=$_). At each line, if the substitution is successful and a _ was removed from the beginning of the line (s/^_//), then the previous line is printed with a space in place of a newline print "$l ". If the substitution failed, the previous line is printed with a newline. The END{} block just prints the final line of the file.
My text file looks like this:
foo.en 14 :: xyz 1;foo bar 2;foofoo 5;bar 9
bar.es 18 :: foo bar 4;kjp bar 2;bar 6;barbar 8
Ignoring the text before the :: delimiter, is there a one-liner Unix command (many pipes allowed) or a one-liner Perl script that extracts the text so that it yields the unique phrases delimited by ;, as below?
xyz
foo bar
foofoo
bar
kjp bar
barbar
I've tried looping through the text file with a Python script, but I'm looking for a one-liner for the task.
ans = set()
for line in open(textfile):
    ans.add(line.partition(" :: ")[1].split(";").split(" ")[:-1])
for a in ans:
    print a
With Perl:
perl -nle 's/.*?::\s*//;!$s{$_}++ and print for split /\s*\d+;?/' input
Description:
s/.*?::\s*//; # delete up to the first '::'
This part:
!$s{$_}++ and print for split /\s*\d+;?/
can be rewritten like this:
foreach my $word (split /\s*\d+;?/) {  # for split /\s*\d+;?/
    if (not defined $seen{$word}) {    # !$s{$_}
        print $word;                   # and print
    }
    $seen{$word}++;                    # $s{$_}++
}
Since the increment in !$s{$_}++ is a post-increment, Perl first tests for the false condition and then does the increment. An undefined hash value evaluates as 0 (false). If the test fails, i.e. $s{$_} was previously incremented, then the and part is skipped due to short-circuiting.
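The same !$seen{$_}++ post-increment trick is, for example, the usual idiom for dropping duplicate lines while preserving input order:
perl -ne 'print unless $seen{$_}++' file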
cat textfile | sed 's/.*:://g' | tr '[0-9]*;' '\n' | sort -u
Explanation:
sed 's/.*:://g' Take everything up to and including `::` and replace it with nothing
tr '[0-9];' '\n' Replace numbers and semicolon with newlines
sort -u Sort, and return unique instances
it does result in a sorted output, I believe...
You can try this:
$ awk -F ' :: ' '{print $2}' input.txt | grep -oP '[^0-9;]+' | sort -u
bar
barbar
foo bar
foofoo
kjp bar
xyz
If your phrases contain numbers, try this Perl regex: '[^;]+?(?=\s+\d+(;|$))'
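For example, plugged into the same pipeline (a sketch that assumes GNU grep with -P support for the lookahead):
awk -F' :: ' '{print $2}' input.txt | grep -oP '[^;]+?(?=\s+\d+(;|$))' | sort -u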
With only awk :
$ awk -F' :: ' '{
    gsub(/[0-9]+/, "")
    split($2, arr, /;/)
    for (a in arr) arr2[arr[a]]=""
}
END{
    for (i in arr2) print i
}' textfile.txt
And a one-liner version :
awk -F' :: ' '{gsub(/[0-9]+/, "");split($2, arr, /;/ );for (a in arr) arr2[arr[a]]="";}END{for (i in arr2) print i}' textfile.txt