I'm trying to get a Perl one-liner working:
$ perl -aln -F"\t" -i -e 'BEGIN{print qq(taxid:int:ncbitaxid\tname\tl:label)} print qq($F[0]\t$F[1]\trank,$F[2])' testing
The input file for testing looks like this:
1 root no rank
2 Bacteria superkingdom
6 Azorhizobium genus
7 Azorhizobium caulinodans species
9 Buchnera aphidicola species
10 Cellvibrio genus
11 [Cellvibrio] gilvus species
13 Dictyoglomus genus
14 Dictyoglomus thermophilum species
16 Methylophilus genus
The desired output looks like this:
taxid:int:ncbitaxid name l:label
1 root rank,no rank
2 Bacteria rank,superkingdom
6 Azorhizobium rank,genus
7 Azorhizobium caulinodans rank,species
9 Buchnera aphidicola rank,species
10 Cellvibrio rank,genus
11 [Cellvibrio] gilvus rank,species
13 Dictyoglomus rank,genus
14 Dictyoglomus thermophilum rank,species
16 Methylophilus rank,genus
I've been able to recreate this using the following, but I want to edit in place, not print to another file.
perl -aln -F"\t" -e 'BEGIN{print qq(taxid:int:ncbitaxid\tname\tl:label)} print qq($F[0]\t$F[1]\trank,$F[2])' testing
See perlrun for the -i switch (in-place editing) and perlvar for $., the line number of the last filehandle accessed.
The catch in your one-liner is that the BEGIN block runs before any input file is opened, so its print goes to the original STDOUT rather than into the edited file. Print the header when $. == 1 instead:
perl -aln -i.bak -F"\t" -e 'print qq(taxid:int:ncbitaxid\tname\tl:label) if $.==1;
print qq($F[0]\t$F[1]\trank,$F[2])' testing
Use the -i switch for in place editing:
perl -i -aln -F"\t" -e'
print qq(taxid:int:ncbitaxid\tname\tl:label) if $. ==1;
print qq($F[0]\t$F[1]\trank,$F[2])
' testing
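Running either command, the file itself should then begin with the header, matching the desired output above (tabs shown flattened):
$ head -3 testing
taxid:int:ncbitaxid name l:label
1 root rank,no rank
2 Bacteria rank,superkingdom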
-i[extension]
specifies that files processed by the <> construct are to be edited in-place. It does this by renaming the input file, opening the output file by the original name, and selecting that output file as the default for print() statements.
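In other words, roughly this sequence of steps (a sketch of the mechanics described above, not Perl's actual code):
mv testing testing.bak                            # rename the input file
perl -aln -F"\t" -e '...' testing.bak > testing   # reopen the original name for output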
sed's n (next) command skips to the next line, but with multiple files there doesn't seem to be any command to skip to the next file.
Is there any workaround using only a single invocation of sed?
Demonstration of the problem. Make two small data files:
seq 3 > three ; seq 10 1 13 > thirteen
Show that sed handles multiple files and is somewhat aware of them as distinct objects, by finding all lines ending in 3 and printing the file names (F is GNU sed's print-file-name command):
sed -n '/3$/{p;F}' three thirteen
Output:
3
three
13
thirteen
This next attempt to print both last lines doesn't work, however; or rather, it works as though both files were a single stream:
sed -n '$p' three thirteen
Output:
13
See if your version supports the -s option:
$ seq 3 > three ; seq 10 1 13 > thirteen
$ sed -n '$p' three thirteen
13
$ sed -n '2p' three thirteen
2
$ sed -sn '$p' three thirteen
3
13
$ sed -sn '2p' three thirteen
2
11
From man sed:
-s, --separate
consider files as separate rather than as a single continuous long stream.
When using the -i option, GNU sed uses -s by default.
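So with -i (GNU sed), $ refers to the last line of each file. For example, this deletes the last line of three and of thirteen separately:
$ sed -i '$d' three thirteen
$ cat three thirteen
1
2
10
11
12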
In case the -s option is not available, here's an alternative with perl:
$ perl -ne 'print if eof' three thirteen
3
13
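The trick generalizes: eof (without parentheses) is true at the end of each file, and closing ARGV there resets $., so per-file line numbers work too. Mirroring the sed -sn '2p' run above:
$ perl -ne 'print if $. == 2; close ARGV if eof' three thirteen
2
11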
So I have a txt file where I need to extract every third number and print it to a separate file using the terminal. The txt file is just a long list of numbers, tab-delimited:
18 25 0 18 24 5 18 23 5 18 22 8.2 ...
I know there is a way to do this using sed or awk, but so far I've only been able to extract every third line by using:
awk 'NR%3==1' testRain.txt > rainOnly.txt
So here's the answer (or rather, the answer I used!):
xargs -n1 < input.txt | awk '!(NR%3)' > output.txt
xargs -n1 re-emits every whitespace-separated token on its own line, so awk's NR counts numbers rather than lines, and !(NR%3) keeps every third one. This gives you an output.txt with every third number of the original file on its own line.
A quick pipeline to extract every 3rd number (3~3 is GNU sed's first~step addressing):
$ xargs -n1 < file | sed '3~3!d'
0
5
5
8.2
If you don't want each number on its own line, throw the result back through xargs:
$ xargs -n1 < file | sed '3~3!d' | xargs
0 5 5 8.2
Use redirection to store the output in a new file:
$ xargs -n1 < file | sed '3~3!d' | xargs > new_file
With awk using a simple for loop you could do:
$ awk '{for(i=3;i<=NF;i+=3)print $i}' file
0
5
5
8.2
or (adds a trailing tab):
$ awk '{for(i=3;i<=NF;i+=3)printf "%s\t",$i;print ""}' file
0 5 5 8.2
Or by setting the value of RS (adds trailing newline):
$ awk '!(NR%3)' RS='\t' file
0
5
5
8.2
$ awk '!(NR%3)' RS='\t' ORS='\t' file
0 5 5 8.2
You can print every third character by substituting the next two with nothing, globally. When the count straddles a newline, using Perl might be the simplest solution:
perl -p000 -e 's/(.)../$1/gs'
If you want the first, fourth etc character from every line, a line-oriented tool like sed suffices:
sed 's/\(.\)../\1/g'
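For example, keeping the first of every three characters:
$ echo abcdefg | sed 's/\(.\)../\1/g'
adg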
Using grep -P (PCRE), where \K discards the already-matched prefix so only the third field is printed:
grep -oP '([^\t]+\t){2}\K[^\t\n]+' file
0
5
5
8.2
This might work for you (GNU sed):
sed -r 's/(\S+\s){3}/\1/g;s/\s$//' file
@user2718946: your solution was close, but here it is without xargs.
awk 'NR%3==1' RS=" " file
18
18
18
18
Different start:
awk 'NR%3==0' RS=" " file
0
5
5
8.2
Is it fairly easy to strip the first and last character from a string using awk or sed?
Say I have this string
( 1 2 3 4 5 6 7 )
I would like to strip parentheses from it.
How should I do this?
sed way
$ echo '( 1 2 3 4 5 6 7 )' | sed 's/^.\(.*\).$/\1/'
1 2 3 4 5 6 7
awk way
$ echo '( 1 2 3 4 5 6 7 )' | awk '{print substr($0, 2, length($0) - 2)}'
1 2 3 4 5 6 7
POSIX sh way (${var#?} strips one leading character, ${var%?} one trailing):
$ var='( 1 2 3 4 5 6 7 )'; var="${var#?}"; var="${var%?}"; echo "$var"
1 2 3 4 5 6 7
bash way (the negative length requires bash 4.2 or newer, and the space before -1 matters; :- would otherwise be parsed as the default-value operator):
$ var='( 1 2 3 4 5 6 7 )'; echo "${var:1: -1}"
1 2 3 4 5 6 7
If you use bash, then use the bash way.
If not, prefer the POSIX sh way; it is faster than loading sed or awk.
Other than that, you may also be doing other text processing that you can combine with this, so depending on the rest of the script you may benefit from using sed or awk in the end.
Why doesn't sed '..' s_res.temp > s_res.temp work?
It does not, because the redirection > truncates the file before it is read.
To solve this you have some choices:
what you really want to do is edit the file. sed is a stream editor, not a file editor.
ed, though, is a file editor (the standard one, too!). So, use ed:
$ printf '%s\n' "%s/^.\(.*\).$/\1/" "." "wq" | ed s_res.temp
(In ed, % addresses every line, shorthand for 1,$; the lone . prints the current line; wq writes the file and quits.)
use a temporary file, and then mv it to replace the old one.
$ sed 's/^.\(.*\).$/\1/' s_res.temp > s_res.temp.temp
$ mv s_res.temp.temp s_res.temp
use the -i option of sed. This is not POSIX: GNU sed takes an optional backup suffix, while BSD/macOS sed requires an argument (use -i '' for no backup):
$ sed -i 's/^.\(.*\).$/\1/' s_res.temp
abuse the shell (not really recommended). This works because the outer < test opens the file before the subshell's rm unlinks it, so sed still reads the original data while > test creates a new file under the old name:
$ (rm test; sed 's/XXX/printf/' > test) < test
On Mac OS X (10.12 Sierra at the time of writing) bash is stuck at version 3.2.57, which is quite old. One can always install bash using brew and get version 4.x, which includes the substitutions needed for the above to work.
There is a collection of bash versions and respective changes, compiled on the bash-hackers wiki
To remove the first and last characters from a given string, I like this sed:
sed -e 's/^.//' -e 's/.$//'
#         ^^          ^^
#     first char   last char
See an example:
sed -e 's/^.//' -e 's/.$//' <<< "(1 2 3 4 5 6 7)"
1 2 3 4 5 6 7
And also a perl way:
perl -pe 's/^.|.$//g'
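Checked with the same here-string as the sed example:
$ perl -pe 's/^.|.$//g' <<< "(1 2 3 4 5 6 7)"
1 2 3 4 5 6 7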
To remove the first character and the last two characters using sed:
Input: "t2.large",
Output: t2.large
sed -e 's/^.//' -e 's/..$//'
I've got a busybox system which doesn't have uniq and I'd like to generate a unique list of duplicated lines.
A plain uniq emulated in awk would be:
sort <filename> | awk '!($0 in a){a[$0]; print}'
How can I use awk (or sed for that matter, not perl) to accomplish:
sort <filename> | uniq -d
On a busybox system, you might need to save bytes. ;-) This prints each line exactly when it is seen for the second time:
awk ++a[\$0]==2
You could do this (no need to sort first):
awk '{++a[$0]; if(a[$0] == 2) print}'
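For example, on unsorted input, each duplicate is reported once, at its second occurrence:
$ printf '%s\n' b a b c a b | awk '{++a[$0]; if(a[$0] == 2) print}'
b
a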
This might work for you:
# make some test data
seq 25 >/tmp/a
seq 3 3 25 >>/tmp/a
seq 5 5 25 >>/tmp/a
# run old command
sort -n /tmp/a | uniq -d
3
5
6
9
10
12
15
18
20
21
24
25
# run sed command
sort -n /tmp/a |
sed ':a;$bb;N;/^\([^\n]*\)\(\n\1\)*$/ba;:b;/^\([^\n]*\)\(\n\1\)\+/{s//\1/;P};D'
3
5
6
9
10
12
15
18
20
21
24
25
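A rough annotation of that script (my reading of it; comments are on their own lines so GNU sed still accepts it):
sed '
  :a
  # at end of input, jump to b
  $bb
  # append the next input line to the pattern space
  N
  # pattern space still one repeated line? keep collecting
  /^\([^\n]*\)\(\n\1\)*$/ba
  :b
  # does the first line occur at least twice?
  /^\([^\n]*\)\(\n\1\)\+/{
    # collapse the run to a single copy and print it
    s//\1/
    P
  }
  # drop the first line and restart without reading input
  D'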
I have a file like this:
1 2 3
4 5 6
7 6 8
9 6 3
4 4 4
What are some one-liners that can output unique elements of the nth column to another file?
EDIT: Here's a list of solutions people gave. Thanks guys!
cat in.txt | cut -d' ' -f 3 | sort -u
cut -c 1 t.txt | sort -u
awk '{ print $2 }' cols.txt | uniq
perl -anE 'say $F[0] unless $h{$F[0]}++' filename
In Perl before 5.10
perl -lane 'print $F[0] unless $h{$F[0]}++' filename
In Perl after 5.10
perl -anE 'say $F[0] unless $h{$F[0]}++' filename
Replace 0 with the column you want to output.
For j_random_hacker, here is an implementation that will use very little memory (but will be slower and requires more typing):
perl -lane 'BEGIN {dbmopen %h, "/tmp/$$", 0600; unlink "/tmp/$$.db" } print $F[0] unless $h{$F[0]}++' filename
dbmopen creates an interface between a DBM file (which it creates or opens) and the hash named %h. Anything stored in %h is stored on disk instead of in memory. Deleting the file with unlink ensures that it will not stick around after the program is done, but has no effect on the running process, since an unlinked file remains accessible until the last open filehandle to it is closed.
Corrected: Thank you Mark Rushakoff.
$ cut -c 1 t.txt | sort | uniq
or
$ cut -c 1 t.txt | sort -u
1
4
7
9
Taking the unique values of the third column:
$ cat in.txt | cut -d' ' -f 3 | sort -u
3
4
6
8
cut -d' ' splits the input on spaces, and -f 3 takes the third field. Finally, sort -u sorts the output, keeping only unique entries.
Say your file is "cols.txt" and you want the unique elements of the second column:
awk '{ print $2 }' cols.txt | uniq
(Note that uniq only collapses adjacent duplicates, so pipe through sort first if the column isn't already grouped.)
You might find the following article useful for learning more about such utilities:
Simplify data extraction using Linux text utilities
If using awk, there is no need for other commands: !_[$2]++ is true only the first time a given value of $2 is seen.
awk '!_[$2]++{print $2}' file
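With the sample file from the question (assuming it is saved as in.txt):
$ awk '!_[$2]++{print $2}' in.txt
2
5
6
4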