column -t is amazing, with one nit: how can I change how many spaces are output between columns? I want one; column -t gives two. For example,
echo -en '111 22 3\n4 555 66\n' | column -t
outputs
111  22   3
4    555  66
but I would like it to output the following:
111 22  3
4   555 66
I think I could run the output through a sed regex to turn two spaces followed by a word boundary into a single space, but I'd like to avoid adding another tool to the mix unless it's necessary.
Suggestions? Simple replacement commands that I could use instead of column -t which accomplish the same thing? Playing OFS games with awk doesn't seem like a drop-in replacement.
You cannot change the built-in spacing of column. This leaves you with either switching to a different tool or post-processing. You can accomplish the latter cheaply with sed to remove a single space before each number:
echo -en '111 22 3\n4 555 66\n' | column -t | sed 's/ \([0-9]\)/\1/g'
Output:
111 22  3
4   555 66
Switching from column to pr might be a good idea. pr has a whole lot more options for controlling output formatting, and can create columns as well...
For example:
echo -en '111 22 3\n4 555 66\n' | tr ' ' '\n' | pr -3aT -s' '
produces:
111 22 3
4 555 66
Not sure how to keep the alignment while still reducing the spaces, so it's not perfect.
The tr is in there because pr expects each entry on a single line.
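If you do want to keep the alignment while using a single space, here is an awk sketch (not from the original answers, just an illustration) that buffers the lines, measures each column's width, and pads with one space:

echo -en '111 22 3\n4 555 66\n' | awk '{
  line[NR] = $0
  # remember the widest entry seen in each column
  for (i = 1; i <= NF; i++) if (length($i) > w[i]) w[i] = length($i)
}
END {
  for (r = 1; r <= NR; r++) {
    n = split(line[r], f)
    # pad each field to its column width, then add a single space
    for (i = 1; i < n; i++) printf "%-" w[i] "s ", f[i]
    print f[n]
  }
}'
111 22  3
4   555 66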
I think the simplest way is to use tabs. Your output calls for a tab width of 5, so in the terminal type: tabs 5. This will change the tab width to 5.
Then type:
echo -en '111 22 3\n4 555 66\n' | tr ' ' '\t'
or:
echo -en '111\t22\t3\n4\t555\t66\n'
Results:
111  22   3
4    555  66
For more info, type: man tabs
column now has an option to define that spacing.
column --help
-o, --output-separator <string> columns separator for table output (default is two spaces)
Single-space delimited:
echo -en '111 22 3\n4 555 66\n' | column -t -o ' '
111 22  3
4   555 66
Pipe delimited:
echo -en '111 22 3\n4 555 66\n' | column -t -o '|'
111|22 |3
4  |555|66
A text file (file name: 1.txt):
Incoming_Queries_A: 13201096
Incoming_Queries_A6: 946
Incoming_Queries_AAAA: 1288191
Incoming_Queries_ANY: 31280
Incoming_Queries_AXFR: 5
Incoming_Queries_CNAME: 410
Incoming_Queries_DS: 20
Incoming_Queries_MX: 854
Incoming_Queries_NS: 97217
Incoming_Queries_PTR: 1011409
Incoming_Queries_SOA: 5006
Incoming_Queries_SPF: 1
Incoming_Queries_SRV: 3555
Incoming_Queries_TXT: 511
Incoming_Requests_IQUERY: 11
Incoming_Requests_NOTIFY: 1
Incoming_Requests_QUERY: 15640501
Incoming_Requests_STATUS: 1
Incoming_Requests_UPDATE: 5
I want to remove everything before the tab (including the tab) in each line and print the result (example: 13201096) to standard out.
Example:
# egrep -i "Incoming_Queries_A:" ./1.txt | sed 's/.Incoming_Queries_A:\t//'
Output:
Incoming_Queries_A: 13201096
But I only want to output 13201096.
How can I fix it? Thanks.
sed doesn't automatically interpret escaped characters (such as \t and \n) when invoked from bash. You can handle this in two different ways:
You can replace the \t with an actual tab in your expression. To type a literal tab in the terminal, press Ctrl-V and then hit the Tab key.
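For example (where <TAB> is only a placeholder for the literal tab you typed with Ctrl-V Tab, not five actual characters):

egrep -i "Incoming_Queries_A:" ./1.txt | sed 's/Incoming_Queries_A:<TAB>//'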
(This one seems far more elegant, IMO.) You can have the shell interpret your \t by placing a $ before the quoted substitution string (bash ANSI-C quoting). This way, your command would look like:
egrep -i "Incoming_Queries_A:" ./1.txt | sed $'s/Incoming_Queries_A:\t//'
(I removed the . before Incoming_Queries_A:, which was probably a typo or a desperate attempt.)
Hope that helps.
Since you only need the second column, you can use cut:
cut -f2 file.txt
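If you only want the value for that one key, you can combine it with grep (assuming the file really is tab-delimited, as in the question):

grep 'Incoming_Queries_A:' 1.txt | cut -f2
13201096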
With awk you can do it like this:
awk '{print $2}' 1.txt
13201096
946
1288191
31280
5
410
20
854
97217
1011409
5006
1
3555
511
11
1
15640501
1
5
Or awk '/Incoming_Queries_A:/ {print $2}' /tmp/t.txt to get only the line you wanted.
You could try the GNU sed command below:
sed -r 's/^.*\t//' file
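If you only want the Incoming_Queries_A line rather than every value, a small variation (again GNU sed, just a sketch) anchors on the key and prints only what matches:

sed -rn '/^Incoming_Queries_A:/ s/^.*\t//p' file
13201096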
So I have a txt file from which I need to extract every third number and print it to a separate file using the terminal. The txt file is just a long list of numbers, tab-delimited:
18 25 0 18 24 5 18 23 5 18 22 8.2 ...
I know there is a way to do this using sed or awk, but so far I've only been able to extract every third line by using:
awk 'NR%3==1' testRain.txt > rainOnly.txt
So here's the answer (or rather, the answer I utilized!):
xargs -n1 < input.txt | awk '!(NR%3)' > output.txt
This gives you an output.txt that has every third number of the original file as a separate line.
A quick pipeline to extract every 3rd number:
$ xargs -n1 < file | sed '3~3!d'
0
5
5
8.2
If you don't want each number on its own line, throw the result back through xargs:
$ xargs -n1 < file | sed '3~3!d' | xargs
0 5 5 8.2
Use redirection to store the output in a new file:
$ xargs -n1 < file | sed '3~3!d' | xargs > new_file
With awk using a simple for loop you could do:
$ awk '{for(i=3;i<=NF;i+=3)print $i}' file
0
5
5
8.2
or (adds a trailing tab):
$ awk '{for(i=3;i<=NF;i+=3)printf "%s\t",$i;print ""}' file
0 5 5 8.2
Or by setting the value of RS (adds trailing newline):
$ awk '!(NR%3)' RS='\t' file
0
5
5
8.2
$ awk '!(NR%3)' RS='\t' ORS='\t' file
0 5 5 8.2
You can print every third character by substituting the next two with nothing, globally. When the count straddles a newline, using Perl might be the simplest solution:
perl -p000 -e 's/(.)../$1/gs'
If you want the first, fourth etc character from every line, a line-oriented tool like sed suffices:
sed 's/\(.\)../\1/g'
Using grep -P
grep -oP '([^\t]+\t){2}\K[^\t\n]+' file
0
5
5
8.2
This might work for you (GNU sed):
sed -r 's/(\S+\s){3}/\1/g;s/\s$//' file
@user2718946
Your solution was close, but here it is without xargs:
awk 'NR%3==1' RS=" " file
18
18
18
18
Different start:
awk 'NR%3==0' RS=" " file
0
5
5
8.2
I am new to bash and having a tough time figuring this out.
Using sed, could anyone help me in finding only even numbers in a given file?
I figured out how to find all numbers starting from [0,2,4,6,8] using this:
sed -n 's/^[0-9]*[02468] /&/w even' <file
But this doesn't guarantee that the number is even for sure.
I am having trouble checking that the matched number ends with [0,2,4,6,8], which would make it even for sure.
So can anyone help me out with this?
Your regex looks a bit weird and I am not sure what you want to do, but this should help:
sed -r -n 's/^[0-9]*?[02468] /even/g'
-r to enable extended regex, *? to make it non-greedy, and /g to perform replacement globally for all lines in file.
Your command should work fine assuming that there is a space after all even numbers and that they are all at the beginning of the lines:
$ echo 'foo
1231
2220
1254 ' | sed -n '/[0-9]*[02468] /p'
2220
1254
Also note that, as you don't actually do a substitution, you don't need the s command. Use an address (pattern) specifier and w command (like I did above with the p command).
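For example, a w-based version of your original filter (a sketch, still relying on the trailing space) writes the matching lines straight to the file even:

sed -n '/[0-9]*[02468] /w even' file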
To make sure that the even digit is the last one, but not necessarily followed by a space, you can do something like:
$ echo 'foo
1231
2220
1254 ' | sed -n '/[0-9]*[02468]\($\|[^0-9]\)/p'
2220
1254
Actually, your case looks more like a use case for grep, not sed, because you are filtering rather than editing. Everything becomes easier with GNU grep, as you can do:
$ echo 'foo
1231
2220
1254 ' | grep -P '\d*[02468](?!\d)'
2220
1254
Just append > even to the command to make it write to the file even.
$ cat file
1
2
3
498
57
12345678
$ awk '$0%2' file
1
3
57
$ awk '!($0%2)' file
2
498
12345678
Why don't you find the numbers ending with [02468]?
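For example, if every line holds just one number, a sketch along those lines would be:

$ sed -n '/^[0-9]*[02468]$/p' file
2
498
12345678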
I have a Verilog file that defines multiple modules containing various input and output variables.
I need to find the last occurrence of such a variable (input/output) using a sed script.
I run the following command:
address=$(sed -n '100,200{/output/=};100,200{/input/=}' file.txt)
It's giving me output as 102 103 104 105 106,
while I want only 106.
Please suggest a way to do this.
This might work for you:
sed '100,200{/input\|output/=};d' file.txt | sed '$!d'
or perhaps as you intended:
address=$(sed '100,200{/input\|output/=};d' file.txt | sed '$!d')
sed -n '100,200p' foo.txt | awk '/input/{s=NR} /output/{s=NR} END{print s}'
You don't have to use sed; of course sed/awk could do it, but try this:
grep -nE "input|output" test.txt|tail -1|cut -f1 -d:
Edit: do you want this?
kent$ echo "102 103 104 105 106"|awk '{print $NF}'
106
Edit again:
kent$ another=$(echo "102 103 104 105 106"|awk '{print $NF}')
kent$ echo $another
106
You could do:
nl -ba < file.txt | sed -n '100,200{/output\|input/h};$x;$p'
I've got a busybox system which doesn't have uniq and I'd like to generate a unique list of duplicated lines.
A plain uniq emulated in awk would be:
sort <filename> | awk '!($0 in a){a[$0]; print}'
How can I use awk (or sed for that matter, not perl) to accomplish:
sort <filename> | uniq -d
On a busybox system, you might need to save bytes. ;-)
awk ++a[\$0]==2
You could do this (no need to sort the input):
awk '{++a[$0]; if(a[$0] == 2) print}'
This might work for you:
# make some test data
seq 25 >/tmp/a
seq 3 3 25 >>/tmp/a
seq 5 5 25 >>/tmp/a
# run old command
sort -n /tmp/a | uniq -d
3
5
6
9
10
12
15
18
20
21
24
25
# run sed command
sort -n /tmp/a |
sed ':a;$bb;N;/^\([^\n]*\)\(\n\1\)*$/ba;:b;/^\([^\n]*\)\(\n\1\)*/{s//\1/;P};D'
3
5
6
9
10
12
15
18
20
21
24
25
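As a quick sanity check (not part of the original answer), on a machine that does have uniq and bash you can diff the two pipelines; no output means they agree on the test data:

diff <(sort -n /tmp/a | uniq -d) \
     <(sort -n /tmp/a | sed ':a;$bb;N;/^\([^\n]*\)\(\n\1\)*$/ba;:b;/^\([^\n]*\)\(\n\1\)*/{s//\1/;P};D')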