print lines if $2<25 from text files with sed or awk

I would like to print $1 and $2 if $2<25 from text files. I also need the total number of students with marks less than 25 across all files. How can I do this with awk or sed?
students marks
jerry 12
peter 35
john 5
jerry 15
john 10
Desired output
jerry 12
john 5
jerry 15
john 10
Total no:of students :- 4

In awk:
$ awk '$2<25 {print; i++} END{print "\nTotal number of students:- "i}' file
Output:
jerry 12
john 5
jerry 15
john 10
Total number of students:- 4
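In that awk program, $2<25 selects lines whose second field is less than 25, i++ counts the matching lines, and the END block runs once after the last input line to print the total.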
If you want the output sorted by grade (lowest to highest):
$ sort -n -k2,2 file | awk '$2<25 {print; i++} END{print "\nTotal number of students:- "i}'
Sorted Output:
john 5
john 10
jerry 12
jerry 15
Total number of students:- 4
-n numerical sort;
-k2,2 sort on the second field.

awk '$2<25{count++ ; print}END{print "Total No of Students :-",count}' your_file
tested below:
> awk '$2<25{count++ ; print}END{print "Total No of Students :-",count}' temp
jerry 12
john 5
jerry 15
john 10
Total No of Students :- 4
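The question also asks for a total across all files. The same program works unchanged with several input files, since END runs only once after the last file has been read; the file names below are just placeholders:
awk '$2<25 {print; count++} END{print "Total No of Students :-", count+0}' marks1.txt marks2.txt
(The +0 makes awk print 0 rather than an empty string if no line matches.)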

Related

How to print lines 3 to 6 using sed?

1 ajar 45000
2 Sunil 25000
3 varoom 50000
4 Amit 47000
5 tanru 15000
6 Deepak 23000
7 Sunil 13000
8 sattvic 80000
I did it using awk, but I want to do it with a sed command.
$ awk 'NR==3, NR==6 {print NR,$0}' employee.txt
sed -n '3,6p' employee.txt
-n tells sed to not print each line;
3,6 is an "address", it tells sed to only apply the following command to the given range of lines;
p tells sed to print the line.
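On the employee.txt sample above, that prints:
3 varoom 50000
4 Amit 47000
5 tanru 15000
6 Deepak 23000
For large files you can also tell sed to quit after line 6 so it stops reading the rest of the input (works in GNU and BSD sed):
sed -n '3,6p;6q' employee.txt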

How to skip every other line, starting by skipping the first line?

Here's my code: ls -lt | sed -n 'p;n'
That code skips every other line when listing file names, but it doesn't start by skipping the first one. How can I make that happen?
You have to invert your sed command: it should be n;p instead of p;n:
Your code:
for x in {1..20}; do echo $x ; done | sed -n 'p;n'
1
3
5
7
9
11
13
15
17
19
The version with sed inverted:
for x in {1..20}; do echo $x ; done | sed -n 'n;p'
Output:
2
4
6
8
10
12
14
16
18
20
You can use GNU sed's ~ (step) address operator: first~step
$ seq 1 10 | sed -n '1~2p'
1
3
5
7
9
$ seq 1 10 | sed -n '2~2p'
2
4
6
8
10
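If your sed does not support the ~ extension, the same selection is easy in awk using the line number modulo 2:
$ seq 1 10 | awk 'NR%2==1'   # odd lines: 1 3 5 7 9
$ seq 1 10 | awk 'NR%2==0'   # even lines: 2 4 6 8 10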

Character count (length) within a specific column

Is there a one-line method to obtain the character length of the strings held in a specific column of a tab-delimited .txt file and then append these counts as a final column (the number of columns may vary)?
Sample Data:
1 AA
2 BBB
3 CCCCC
4 EE
5 DDD
6 AAA
7 FFFFF
8 AA
9 BBB
10 NNN
To get the counts, I have attempted to use:
perl -lane 'print length $F[2]' in > out
perl -F, -Mopen=:locale -lane 'print length $F[2]' in > out
However, the results are empty.
I have also tried:
perl -lane '$_.=$F[2]; print length $_'
But this, as I now realise, prints the number of characters for the entire line rather than a specific column.
I am not sure how I would then append the final column.
Desired Output (when counting column 2):
1 AA 2
2 BBB 3
3 CCCCC 5
4 EE 2
5 DDD 3
6 AAA 3
7 FFFFF 5
8 AA 2
9 BBB 3
10 NNN 3
It seems that you were close. Perl array indices start at zero, so how about using the length of $F[1]? You will also need some sort of separator:
perl -lape '$_ .= "\t". length($F[1])' input
Output:
1 AA 2
2 BBB 3
3 CCCCC 5
4 EE 2
5 DDD 3
6 AAA 3
7 FFFFF 5
8 AA 2
9 BBB 3
10 NNN 3
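Here -a autosplits each input line into the @F array, -p prints each (possibly modified) line after the expression runs, and -l handles the trailing newline.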
If you want the output exactly as you show, then you will need to use printf, like this:
perl -lane 'printf qq{%-4d%-8s%d\n}, @F, length($F[1])' input
Output:
1 AA 2
2 BBB 3
3 CCCCC 5
4 EE 2
5 DDD 3
6 AAA 3
7 FFFFF 5
8 AA 2
9 BBB 3
10 NNN 3
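For comparison, the same column-length append can be done in awk, assuming the file really is tab-delimited as described:
awk -F'\t' -v OFS='\t' '{print $0, length($2)}' in > out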

Find "N" minimum and "N" maximum values with respect to a column in the file and print the specific rows

I have a tab delimited file such as
Jack 2 98 F
Jones 6 25 51.77
Mike 8 11 61.70
Gareth 1 85 F
Simon 4 76 4.79
Mark 11 12 38.83
Tony 7 82 F
Lewis 19 17 12.83
James 12 1 88.83
I want to find the N minimum and N maximum values (N may be more than 5) in the last column and then print the rows that contain those values. I want to ignore the rows with F. For example, if I want the two minimum and two maximum values in the above data, my output would be:
Minimum case
Simon 4 76 4.79
Lewis 19 17 12.83
Maximum case
James 12 1 88.83
Mike 8 11 61.70
I can ignore the rows that do not have a numeric value in the fourth column using
awk -F "\t" '$4+0 != $4{next}1' inputfile.txt
I can also pipe this output to find the single minimum value using
awk -F "\t" '$4+0 != $4{next}1' inputfile.txt |awk 'NR == 1 || $4 < min {line = $0; min = $4}END{print line}'
and similarly for the maximum value, but how can I extend this to more than one value, like 2 values in the toy example above and 10 for my real data?
n can be a variable; in this case, I set n=3. Note: this may have problems if there are lines with the same value in the last column, because the array is indexed by that value, so duplicates overwrite each other. It also relies on GNU awk (gawk 4.0+) for asorti with a sort specification.
kent$ awk -v n=3 '$NF+0==$NF{a[$NF]=$0}
END{ asorti(a,k,"@ind_num_asc")
print "min:"
for(i=1;i<=n;i++) print a[k[i]]
print "max:"
for(i=length(a)-n+1;i<=length(a);i++)print a[k[i]]}' f
min:
Simon 4 76 4.79
Lewis 19 17 12.83
Mark 11 12 38.83
max:
Jones 6 25 51.77
Mike 8 11 61.70
James 12 1 88.83
You can get the minimum and maximum at once with a little redirection:
minmaxlines=2
( ( grep -v 'F$' inputfile.txt | sort -n -k4 | tee /dev/fd/4 | head -n $minmaxlines >&3 ) 4>&1 | tail -n $minmaxlines ) 3>&1
Here's a pipeline approach to the problem.
$ grep -v 'F$' inputfile.txt | sort -nk 4 | head -2
Simon 4 76 4.79
Lewis 19 17 12.83
$ grep -v 'F$' inputfile.txt | sort -nk 4 | tail -2
Mike 8 11 61.70
James 12 1 88.83
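To make the count a variable, as the question asks, the same pipelines can take n from the shell (a sketch; inputfile.txt and the trailing F marker are as above):
n=2
grep -v 'F$' inputfile.txt | sort -nk 4 | head -n "$n"   # n smallest
grep -v 'F$' inputfile.txt | sort -nk 4 | tail -n "$n"   # n largest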

bash merge files by matching columns

I have two files:
File1
12 abc
34 cde
42 dfg
11 df
9 e
File2
23 abc
24 gjr
12 dfg
8 df
I want to merge the files by matching on column 2, so the output looks like this:
File1 File2
12 23 abc
42 12 dfg
11 8 df
34 NA cde
9 NA e
NA 24 gjr
How can I do this?
I tried it like this:
cat File* >> tmp; sort tmp | uniq -c | awk '{print $2}' > column2
for i in $(cat column2); do grep -w "$i" File*
But this is where I am stuck...
I don't know how, after grepping, I should combine the files column by column and write NA where a value is missing.
Hope someone could help me with this.
Since I was testing with bash 3.2 running as sh (which does not support process substitution when invoked as sh), I used two temporary files to get the data ready for use with join:
$ sort -k2b File2 > f2.sort
$ sort -k2b File1 > f1.sort
$ cat f1.sort
12 abc
34 cde
11 df
42 dfg
9 e
$ cat f2.sort
23 abc
8 df
12 dfg
24 gjr
$ join -1 2 -2 2 -o 1.1,2.1,0 -a 1 -a 2 -e NA f1.sort f2.sort
12 23 abc
34 NA cde
11 8 df
42 12 dfg
9 NA e
NA 24 gjr
$
With process substitution, you could write:
join -1 2 -2 2 -o 1.1,2.1,0 -a 1 -a 2 -e NA <(sort -k2b File1) <(sort -k2b File2)
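In the join command:
-1 2 -2 2 join on the second field of each file;
-o 1.1,2.1,0 output field 1 of File1, field 1 of File2, then the join field itself;
-a 1 -a 2 also print lines from either file that have no match;
-e NA fill any missing output fields (those named in -o) with NA.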
If you want the data formatted differently, use awk to post-process the output:
$ join -1 2 -2 2 -o 1.1,2.1,0 -a 1 -a 2 -e NA f1.sort f2.sort |
> awk '{ printf "%-5s %-5s %s\n", $1, $2, $3 }'
12 23 abc
34 NA cde
11 8 df
42 12 dfg
9 NA e
NA 24 gjr
$
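For completeness, a similar merge can be sketched in awk without pre-sorting, reading File2 into an array keyed on column 2 and then walking File1; matched and File1-only rows print in File1 order, followed by the File2-only rows:
awk 'NR==FNR { f2[$2] = $1; next }
     { print $1, (($2 in f2) ? f2[$2] : "NA"), $2; seen[$2] = 1 }
     END { for (k in f2) if (!(k in seen)) print "NA", f2[k], k }' File2 File1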