Hi, I'm currently writing a shell script and need to get the value from a column when the next column matches a given value. An example of the output to be searched is below.
con1{649}: AES_CBC_256/HMAC_SHA2_256_128/MODP_2048, 306081 bytes_i, 444452 bytes_o, rekeying in 6 minutes
So in the above output I'm looking to extract "306081", but as the column position can move I want to grab the value of the column before "bytes_i".
I've tried the following but it fails to return a value:
ipsec statusall | grep 'con1{' | awk -v b="bytes_i" '{for (i=1;i<=NF;i++) { if ($i == b) { print i } } }'
I was thinking that if I could get the column number of bytes_i I could subtract 1 and then use awk to grab that column's value, but I'm open to suggestions.
I'm sure there is probably a more optimal way to do this, but I've managed to solve it with the following shell script. In this function, $1 is the connection to look for and $2 is the label whose preceding number I want.
ipsec_get_bytes()
{
    for ipsec in $(ipsec statusall | grep $1\{ | grep bytes_ | tr -su ' ' '\n' | tail -n +2)
    do
        if [ "$ipsec" == "$2" ] || [ "$ipsec" == "$2," ]; then
            echo $ipsec_prev
            break
        else
            ipsec_prev=$ipsec
        fi
    done
}
You could use sed with back-references:
ipsec statusall | sed -n 's/\(.*\) \([0-9]*\) \(bytes_i.*\)/\2/p'
This will return the numeric string that immediately precedes bytes_i.
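Your original idea of finding the column that holds bytes_i and taking the previous field also works directly in awk. A minimal sketch (untested; it folds the grep into the awk pattern and allows for the trailing comma seen in the sample output):
ipsec statusall | awk '/con1[{]/ { for (i = 2; i <= NF; i++) if ($i == "bytes_i" || $i == "bytes_i,") print $(i-1) }'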
my command looks like:
for i in *.fasta ; do
parallel -j 10 python script.py $i > $i.out
done
I want to add a test condition to this loop so that it only executes the parallel python script if there are no identical lines in the .fasta file.
An example .fasta file is below:
>ref2
GGTTAGGGCCGCCTGTTGGTGGGCGGGAATCAAGCAGCATTTTGGAATTCCCTACAATCC
CCAAAGTCAAGGAGTAGTAGAATCTATGCGGAAAGAATTAAAGAAAATTATAGGACAGGT
AAGAGATCAGGCTGAACATCTTAAGACAGCAGTACAAATGGC
>mut_1_2964_0
AAAAAAAAACGCCTGTTGGTGGGCGGGAATCAAGCAGGTATTTGGAATTCCCTACAATCC
CCAAAGTCAAGGAGTAGTAGAATCTATGTTGAAAGAATTAAAGAAAATTATAGGACAGGT
AAGAGATCAGGCTGAACATCTTAAGACAGCAGTACAAATGGC
An example .fasta file that I would like excluded, because lines 2 and 4 are identical:
>ref2
GGTTAGGGCCGCCTGTTGGTGGGCGGGAATCAAGCAGCATTTTGGAATTCCCTACAATCC
CCAAAGTCAAGGAGTAGTAGAATCTATGCGGAAAGAATTAAAGAAAATTATAGGACAGGT
AAGAGATCAGGCTGAACATCTTAAGACAGCAGTACAAATGGC
>mut_1_2964_0
GGTTAGGGCCGCCTGTTGGTGGGCGGGAATCAAGCAGCATTTTGGAATTCCCTACAATCC
CCAAAGTCAAGGAGTAGTAGAATCTATGCGGAAAGAATTAAAGAAAATTATAGGACAGGT
AAGAGATCAGGCTGAACATCTTAAGACAGCAGTACAAATGGC
The input files always have 4 lines exactly, and lines 2 and 4 are always the lines to be compared.
I've been using sort file.fasta | uniq -c to see if there are identical lines, but I don't know how to incorporate this into my bash loop.
EDIT:
command:
for i in read_00.fasta ; do lines=$(awk 'NR % 4 == 2' $i | sort | uniq -c | awk '$1 > 1'); if [ -z "$lines" ]; then echo $i >> not.identical.txt; fi; done
read_00.fasta:
>ref
GGTGCCCACACTAATGATGTAAAACAATTAACAGAGGCAGTGCAAAAAATAACCACAGAAAGCATAGTAATATGGGGAAAGACTCCTAAATTTAAACTGCCCATACAAAAGGAAACATGGGAAACATGGTGGACAGAGTATTGGCAAGCCACCTGGATTCCTGAGTGGGAGTTTGTTAATACCCCTCCCTTAGTGAAATTATGGTACCAGTTAGA
>mut_1_2964_0
GGTGCCCACACTAATGATGTAAAACAATTAACAGAGGCAGTGCAAAAAATAACCACAGAAAGCATAGTAATATGGGGAAAGACTCCTAAATTTAAACTGCCCATACAAAAGGAAACATGGGAAACATGGTGGACAGAGTATTGGCAAGCCACCTGGATTCCTGAGTGGGAGTTTGTTAATACCCCTCCCTTAGTGAAATTATGGTACCAGTTAGA
Verify the content of those specific lines with the awk below and exit with failure when the lines are identical, or with success otherwise (instead of exiting, you can print or do whatever else you need):
awk 'NR==2{ prev=$0 } NR==4{ if(prev==$0) exit 1; else exit }' "./$yourFile"
or, to print the file name instead when the 2nd and 4th lines differ (using FNR rather than NR so the line counter restarts for each file):
awk 'FNR==2{ prev=$0 } FNR==4{ if(prev!=$0) print FILENAME }' ./*.fasta
Using the exit status of the first command, you can then easily chain your second command, like:
for file in ./*.fasta; do
    awk 'NR==2{ prev=$0 } NR==4{ if(prev==$0) exit 1; else exit }' "$file" &&
        { parallel -j 10 python script.py "$file" > "$file.out"; }
done
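If you would rather build on the sort | uniq idea from the question, a rough equivalent could look like this (an untested sketch; uniq -d prints only duplicated lines, so an empty result means no two lines in the file are identical):
for i in *.fasta ; do
    if [ -z "$(sort "$i" | uniq -d)" ]; then
        parallel -j 10 python script.py "$i" > "$i.out"
    fi
done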
I have a bash script which echoes out an HTML file, like this: ./foo.sh > out.html. In the HTML there are timestamps in the following format: 2019-02-24T17:02:19Z.
I wrote a function to convert the timestamp to the time delta between the timestamp and now.
my_uptime(){
    START=$(date -d "$1" "+%s")
    NOW=$(date "+%s")
    STD=$(echo "($NOW-$START)/3600" | bc)
    MIN=$(echo "(($NOW-$START)/60)%60" | bc)
    echo "Uptime $STD h $MIN min"
}
Now I want to replace the timestamp with the output of my_uptime directly in the stream. I tried this:
echo "<p>some html</p>
2019-02-24T17:02:19Z
<p>some more html</p>" | sed -r "s/[0-9\-]+T[0-9:]+Z/$(my_uptime \0)/"
This fails because the command substitution doesn't recognize the back reference and puts in a literal 0. Is there another way to achieve this? Preferably directly in the stream.
... | sed -r "s/[0-9\-]+T[0-9:]+Z/$(my_uptime \0)/"
This code is attempting to pass the matched value from sed's s/// into the shell function. However, $(...) is expanded before sed even sees it.
Using sed is probably not appropriate here.
Here's a perl replacement that effectively combines your shell function and sed:
... | perl -ple '
if (/([0-9-]+T[0-9:]+Z)/) {
$s = `date -d "$1" +%s`;
$n = time;
$h = int(($n-$s)/3600);
$m = int(($n-$s)/60)%60;
$_ = "Uptime $h h $m min";
}'
You could probably do something similar in awk.
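For example, a rough gawk equivalent (only a sketch: it relies on gawk's systime()/mktime(), which are not in POSIX awk, and the second mktime() argument for UTC needs gawk 4.2 or later):
... | gawk '{
  if (match($0, /[0-9-]+T[0-9:]+Z/)) {
    ts = substr($0, RSTART, RLENGTH - 1)   # timestamp without the trailing Z
    gsub(/[-T:]/, " ", ts)                 # "2019-02-24T17:02:19" -> "2019 02 24 17 02 19"
    d = systime() - mktime(ts, 1)          # age in seconds; 2nd arg treats the spec as UTC
    sub(/[0-9-]+T[0-9:]+Z/, sprintf("Uptime %d h %d min", int(d/3600), int(d/60) % 60))
  }
  print
}'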
A system call I'm making in Perl is as follows:
@filesystems = `/nas/bin/nas_fs -query:TypeNumeric==1:IsRoot==False -fields:RWServers,ROServers,Name,RWMountpoint, -format:%L:%L:%s:%s\\\\n`;
works well. It gets me the desired info:
server_5::fs_pipeline_950155:/root_vdm_30/fs_pipeline_95015
:server_7:fs_nfs_esx_wks_vms:
server_7::fs_ovid3:/fs_ovid3
If, however, I really only want to populate @filesystems with entries for which there's a value in column 1 (i.e. the first value ... lines 1 and 3 in the example above), I'm unsure how to achieve this. awk -F through a pipe doesn't seem to work.
You can do this in your script after @filesystems is populated:
# Removes blank lines and lines starting with :
@filesystems = grep { !/(^:|^\s*$)/ } @filesystems;
You could do this in Perl itself:
my @temp = `/nas/bin/nas_fs -query:TypeNumeric==1:IsRoot==False -fields:RWServers,ROServers,Name,RWMountpoint, -format:%L:%L:%s:%s\\\\n`;
my @filesystems = grep { !/^:/ } @temp;
This filters out any entries which begin with a colon from the list.
Alternatively, you could invoke another process:
my #filesystems = `/nas/bin/nas_fs <args> | grep -v '^:'`;
grep -v returns only lines that don't match the pattern, so lines beginning with a colon are excluded.
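As a footnote, the awk -F idea from the question does work through a pipe once you test the first field for emptiness; a sketch (not run against real nas_fs output, and note the \$ so Perl doesn't interpolate awk's $1):
my @filesystems = `/nas/bin/nas_fs <args> | awk -F: '\$1 != ""'`;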
I have a "pipe-separated" file that has about 20 columns. I want to just hash the first column which is a number like account number using sha1sum and return the rest of the columns as is.
Whats the best way I can do this using awk or sed?
Accountid|Time|Category|.....
8238438|20140101021301|sub1|...
3432323|20140101041903|sub2|...
9342342|20140101050303|sub1|...
Above is an example of the text file showing just 3 columns. Only the first column has the hash function applied to it. The result should look like:
Accountid|Time|Category|.....
104a1f34b26ae47a67273fe06456be1fe97f75ba|20140101021301|sub1|...
c84270c403adcd8aba9484807a9f1c2164d7f57b|20140101041903|sub2|...
4fa518d8b005e4f9a085d48a4b5f2c558c8402eb|20140101050303|sub1|...
What the Best Way™ is, is up for debate. One way to do it with awk is:
awk -F'|' 'BEGIN { OFS=FS } NR == 1 { print } NR != 1 { gsub(/'\''/, "'\'\\\\\'\''", $1); command = ("echo '\''" $1 "'\'' | sha1sum -b | cut -d\\ -f 1"); command | getline hash; close(command); $1 = hash; print }' filename
That is
BEGIN {
    OFS = FS   # set output field separator to field separator; we will use
               # it because we meddle with the fields.
}
NR == 1 {      # first line: just print headers.
    print
}
NR != 1 {      # from there on do the hash/replace
    # this constructs a shell command (and runs it) that echoes the field
    # (singly-quoted to prevent surprises) through sha1sum -b, cuts out the hash
    # and gets it back into awk with getline (into the variable hash)
    # the gsub bit is to prevent the shell from barfing if there's an apostrophe
    # in one of the fields.
    gsub(/'/, "'\\''", $1);
    command = ("echo '" $1 "' | sha1sum -b | cut -d\\ -f 1")
    command | getline hash
    close(command)
    # then replace the field and print the result.
    $1 = hash
    print
}
You will notice the differences between the shell command at the top and the awk code at the bottom; that is all due to shell expansion. Because I put the awk code in single quotes in the shell commands (double quotes are not up for debate in that context, what with $1 and all), and because the code contains single quotes, making it work inline leads to a nightmare of backslashes. Because of this, my advice is to put the awk code into a file, say foo.awk, and run
awk -F'|' -f foo.awk filename
instead.
Here's an awk executable script that does what you want:
#!/usr/bin/awk -f
BEGIN { FS=OFS="|" }
FNR != 1 { $1 = encodeData( $1 ) }
47
function encodeData( fld ) {
    cmd = sprintf( "echo %s | sha1sum", fld )
    cmd | getline output
    close( cmd )
    split( output, arr, " " )
    return arr[1]
}
Here's the flow break down:
Set the input and output field separators to |
When the row isn't the first (header) row, re-assign $1 to an encoded value
Print the entire row when 47 is true (always)
Here's the encodeData function break down:
Create a cmd to feed data to sha1sum
Feed it to getline
Close the cmd
On my system, there's extra info after the sha1sum hash, so I discard it by splitting the output
Return the first field of the sha1sum output.
With your data, I get the following:
Accountid|Time|Category|.....
104a1f34b26ae47a67273fe06456be1fe97f75ba|20140101021301|sub1|...
c84270c403adcd8aba9484807a9f1c2164d7f57b|20140101041903|sub2|...
4fa518d8b005e4f9a085d48a4b5f2c558c8402eb|20140101050303|sub1|...
Run it by calling awk -f awk.script data (or ./awk.script data if you make it executable).
EDIT by EdMorton:
Sorry for the edit, but your script above is the right approach; it just needs some tweaks to make it more robust, and this is much easier to show than to describe in a comment:
$ cat tst.awk
BEGIN { FS=OFS="|" }
NR==1 { for (i=1; i<=NF; i++) f[$i] = i; next }
{ $(f["Accountid"]) = encodeData($(f["Accountid"])); print }
function encodeData( fld, cmd, output ) {
    cmd = "echo \047" fld "\047 | sha1sum"
    if ( (cmd | getline output) > 0 ) {
        sub(/ .*/,"",output)
    }
    else {
        print "failed to hash " fld | "cat>&2"
        output = fld
    }
    close( cmd )
    return output
}
$ awk -f tst.awk file
104a1f34b26ae47a67273fe06456be1fe97f75ba|20140101021301|sub1|...
c84270c403adcd8aba9484807a9f1c2164d7f57b|20140101041903|sub2|...
4fa518d8b005e4f9a085d48a4b5f2c558c8402eb|20140101050303|sub1|...
The f[] array decouples your script from hard-coding the number of the field that needs to be hashed; the additional args for your function make them local and so always null/zero on each invocation; the if on getline means you won't return the previous success value if getline fails (see http://awk.info/?tip/getline); and the rest is mostly style/preference with a bit of a performance improvement.
Basically, right now I have a for loop that runs a series of tests. Once the tests pass, I input the results into a CSV file:
for (( some statement ))
do
if [[ something ]]; then
input this value into a specific row and column
fi
done
What I can't figure out right now is how to input a specific value into a specific cell in the csv file. I know in awk you can read a cell with this command:
awk -v "row=2" -F'#' 'NR == row { print $2 }' some.csv and this will print the cell in the 2nd row and 2nd column. I need something similar to this except it can input a value into a specific cell instead of read it. Is there a function that does this?
You can use the following:
awk -v value=$value -v row=$row -v col=$col 'BEGIN{FS=OFS="#"} NR==row {$col=value}1' file
And set the bash variables $value, $row and $col. Then you can redirect the output to a new file and move it over the original:
awk ... file > new_file && mv new_file file
The && means that the second command will be performed only if the first command (awk ...) is executed successfully.
Explanation
-v value=$value -v row=$row -v col=$col passes the bash variables to awk. Note that value, row and col could have other names; I just used the same names as the bash variables to make it easier to understand.
BEGIN{FS=OFS="#"} set the Field Separator and Output Field Separator to be #. The OFS="#" is not necessary here, but can be useful in case you do some print.
NR==row {$col=value} when the record number (here, the line number) is equal to row, set column col to value.
1 performs the default awk action: {print $0}.
Example
$ cat a
hello#how#are#you
i#am#fine#thanks
hoho#haha#hehe
$ row=2
$ col=3
$ value="XXX"
$ awk -v value=$value -v row=$row -v col=$col 'BEGIN{FS=OFS="#"} NR==row {$col=value}1' a
hello#how#are#you
i#am#XXX#thanks
hoho#haha#hehe
Your question has a 'perl' tag so here is a way to do it using Tie::Array::CSV which allows you to treat the CSV file as an array of arrays and use standard array operations:
use strict;
use warnings;
use Tie::Array::CSV;
my $row = 2;
my $col = 3;
my $value = 'value';
my $filename = '/path/to/file.csv';
tie my #file, 'Tie::Array::CSV', $filename, sep_char => '#';
$file[$row][$col] = $value;
untie #file;
Using sed:
row=2 # define the row number
col=3 # define the column number
value="value" # define the value you need change.
sed "$row s/[^#]\{1,\}/$value/$col" file.csv # use shell variable in sed to find row number first, then replace any word between #, and only replace the nominate column.
# So above sed command is converted to sed "2 s/[^#]\{1,\}/value/3" file.csv
If the above command works and your sed supports the -i option, then run this to change the content directly in file.csv:
sed -i "$row s/[^#]\{1,\}/$value/$col" file.csv
Otherwise, write to a temp file and then move it back over the original:
sed "$row s/[^#]\{1,\}/$value/$col" file.csv > temp.csv
mv temp.csv file.csv