Get the lines whose date is older or newer than a given date

I have a log file containing a list of filenames in this format:
/dir1/dir2/dir3/dir4/dir5/dir6/dir7/us.ca.sf.release123.20170401164500025
/dir1/dir2/dir3/dir4/dir5/dir6/dir7/us.ca.sf.release123.20170401170000022
/dir1/dir2/dir3/dir4/dir5/dir6/dir7/us.ca.sf.release123.20170401171500018
/dir1/dir2/dir3/dir4/dir5/dir6/dir7/us.ca.sf.release123.20170401173000018
Now, in a ksh script, I'm trying to split these lines into two lists: one with the lines whose date is OLDER than a given date, and one with the lines whose date is NEWER. For the given date 20170401 17:12, the first two lines should go into older_list and the last two lines into newer_list, like this:
older_list file has these:
/dir1/dir2/dir3/dir4/dir5/dir6/dir7/us.ca.sf.release123.20170401164500025
/dir1/dir2/dir3/dir4/dir5/dir6/dir7/us.ca.sf.release123.20170401170000022
newer_list file has these:
/dir1/dir2/dir3/dir4/dir5/dir6/dir7/us.ca.sf.release123.20170401171500018
/dir1/dir2/dir3/dir4/dir5/dir6/dir7/us.ca.sf.release123.20170401173000018
Can you guys please suggest a script that can handle this? Ours is an old AIX version:
oslevel
5.3.0.0
Thanks

You could use, for example, the awk tool to achieve this:
Example:
TIME=201704011700   # you could likewise set it to the current date: TIME=$(date +%Y%m%d%H%M)
awk -F'\.' -v time="$TIME" '{if (substr($5,1,12) >= time) {print $0}}' your_input_log_file > filtered_after_$TIME
awk -F'\.' -v time="$TIME" '{if (substr($5,1,12) < time) {print $0}}' your_input_log_file > filtered_before_$TIME
Explanation:
TIME is a variable defined in your shell.
The awk option -F defines . as the field delimiter, so the field we are interested in, as per your example, is $5.
The -v option defines an awk variable named time, initialized from your TIME shell variable before processing starts.
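Since the question gives the cutoff as 20170401 17:12, here is a minimal sketch (assuming the cutoff really arrives in that "YYYYMMDD HH:MM" form) of building the 12-digit key that the substr($5,1,12) comparison expects:
given="20170401 17:12"
TIME=$(echo "$given" | tr -d ' :')   # strip the space and colon -> 201704011712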

Related

Filtering tshark output for .csv. Preventing errors from missing fields

I am trying to filter a pcap file in tshark with a Lua script and ultimately output it to a .csv. I am most of the way there, but I am still running into a few issues.
This is what I have so far:
tshark -n -r myfile.pcap -X lua_script:wireshark_dissector.lua -T fields -e frame.time_epoch -e Something_UDP.field1 -e Something_UDP.field2 -e Something_UDP.field3 -e Something_UDP.field4 -e Something_UDP.field5 -e Something_UDP.field6 -e Something_UDP.field15 -e Something_UDP.field16 -e Something_UDP.field18 -e Something_UDP.field22 -E separator=,
Here is an example of what the frames look like, sort of.
frame 1
time: 1626806198.437893000
Something_UDP.field1: 0
Something_UDP.field2: 1
Something_UDP.field3: 1
Something_UDP.field5: 1
Something_UDP.field6: 1
frame 2
time: 1626806198.439970000
Something_UDP.field8: 1
Something_UDP.field9: 0
Something_UDP.field13: 0
Something_UDP.field14: 0
frame 3
time: 1626806198.440052000
Something_UDP.field15: 1
Something_UDP.field16: 0
Something_UDP.field18: 1
Something_UDP.field19: 1
Something_UDP.field20: 1
Something_UDP.field22: 0
Something_UDP.field24: 0
The output I am looking for would be
1626806198.437893000,0,1,1,,1,1,1,,,,,
1626806198.440052000,,,,,,,,,1,0,,1,1,1,,0,0,,,,
That is, if the frame contains one of the fields I am looking for, it will output its value followed by a comma, but if that field isn't there it will output just a comma. One issue is that not every frame contains info that I am interested in, and I don't want those frames to be output. Part of the issue is that one of the fields I need is the epoch time, which will be in every frame, but it is only important if the other fields are there. I could use awk or grep to do this, but I am wondering if it can all be done inside tshark. The other issue is that the fields being requested will come from a text file, and there may be fields in the text file that don't actually exist in the pcap file; if that happens I get a "tshark: Some fields aren't valid:" error.
In short, I have 2 issues:
1: I need to print data only if the field names match, but not if the only match is epoch.
2: I need it to work even if one of the fields being requested doesn't exist.
I need to print data only if the field names match, but not if the only match is epoch.
Try using a display filter that mentions all the field names in which you're interested, with an "or" separating them, such as
-Y "Something_UDP.field1 or Something_UDP.field2 or Something_UDP.field3 or Something_UDP.field4 or Something_UDP.field5 or Something_UDP.field6 or Something_UDP.field15 or Something_UDP.field16 or Something_UDP.field18 or Something_UDP.field22"
so that only packets containing at least one of those fields will be processed.
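Putting that together with the field extraction, the full command would look something like this sketch (only three of the fields shown; extend the -Y expression and the -e list with the remaining names):
tshark -n -r myfile.pcap -X lua_script:wireshark_dissector.lua \
    -Y "Something_UDP.field1 or Something_UDP.field2 or Something_UDP.field15" \
    -T fields -e frame.time_epoch -e Something_UDP.field1 -e Something_UDP.field2 \
    -e Something_UDP.field15 -E separator=, > myfile.csv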
I need it to work even if one of the fields being requested doesn't exist.
Then you will need to construct the command line on the fly, avoiding field names that aren't valid.
One way, in a script, to test whether a field is valid is to use the dftest command:
dftest Something_UDP.field1 >/dev/null 2>&1
will exit with a status of 0 if there's a field named "Something_UDP.field1" and will exit with a status of 2 if there isn't; if the scripting language you're using can check the exit status of a command to see if it succeeds, you can use that.
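For example, in a POSIX shell you could filter the requested names through dftest and build both the -Y filter and the -e options on the fly (a sketch; fields.txt, one field name per line, is a hypothetical input file):
#!/bin/sh
fieldargs=""
filter=""
while read -r f; do
    if dftest "$f" >/dev/null 2>&1; then     # exit status 0 => field exists
        fieldargs="$fieldargs -e $f"
        filter="${filter:+$filter or }$f"
    fi
done < fields.txt
# $fieldargs is left unquoted on purpose so it splits into separate arguments
tshark -n -r myfile.pcap -X lua_script:wireshark_dissector.lua \
    -Y "$filter" -T fields -e frame.time_epoch $fieldargs -E separator=,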

Getting Exiftool duration without fuzzy time

When I use exiftool to get the duration of an audio file, if the file is over 24 hours I get “1 day 1:23:45” instead of 25:23:45. Sometimes I get “approx 13:17:23”.
Is it possible to tell exiftool to only return HH:MM:SS regardless of how long the file actually is and if it thinks the time is approximate or not (I can strip out the approximate if I have to, but if there’s a way to specify the output format I can’t find it)?
exiftool -d "%H:%M:%S" -Duration Audiobook.m4a
Duration : 1 days 1:17:20
This works, assuming there is no way to get exiftool to output the total hours:
# $FILE and $DURA are assumed to be set earlier in the script
if [[ $DURA == *"days"* ]]; then
    EXIF=$(exiftool -Duration# "$FILE")    # numeric duration, in seconds
    SEC=$(awk -F": " '/Dura/ {print $2}' <<<"$EXIF" | awk -F'.' '{print $1}')
    HOR=$((SEC / 3600))
    MIN=$((SEC % 3600 / 60))
    SES=$((SEC % 60))
    DURA=$(printf '%d:%02d:%02d' "$HOR" "$MIN" "$SES")   # zero-pad minutes and seconds
fi
Answer: there is no way to do this in exiftool itself, so work around it somehow (I chose a bash script, but whatever works).
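For reference, the same arithmetic as a one-liner, using exiftool's numeric -Duration# tag (seconds) and -s3 to print the value only; the zero-padded printf gives HH:MM:SS directly:
exiftool -s3 -Duration# Audiobook.m4a | awk '{s=int($1); printf "%d:%02d:%02d\n", s/3600, s%3600/60, s%60}'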

sh: can't return one result after comparing 2 files

As an example, I will use different inputs to keep my files private and avoid long text; they have the following form:
INPUT1.cfg:
TC # aa # D317
TC # bb # D314
TC # cc # D315
TC # dd # D316
INPUT2.cfg:
BL;nn;3
LY;ww;3
LO;xx;3
TC;vv;3
TC;dd;3
OD;pp;3
TC;aa;3
What I want to do is iterate over the names (column 2) in the rows of input1 and compare them with the names (column 2) in the rows of input2; if they match, the matching line of INPUT2 should go into an output file, otherwise the script should report that the table is not found. Here is my attempt:
#!/bin/bash
input1="input1.cfg"
input2="input2.cfg"
cat $input1 | while read line
do
    TableNameIN=`echo $line | cut -d"#" -f2`
    cat $input2 | while read line
    do
        TableNameOUT=`echo $line | cut -d";" -f2`
        if echo "$TableNameOUT" | grep -q $TableNameIN; then
            echo "$line" >> output.txt
        else
            echo "Table $TableNameIN not found"
        fi
    done
done
This is what I get as a result:
Table bb not found
Table bb not found
Table bb not found
Table cc not found
Table cc not found
Table cc not found
I managed to write out the lines that match, but the problem with my code is that it outputs "table not found" for every row, whereas I want it written only once, after all the lines have been compared.
Here is the output I want to get:
Table bb not found
Table cc not found
Can anyone help me with this? PS: I don't want to use awk, because this is just one part of my code and I'm already using sh.
Assumptions:
for file input2.cfg the 2nd column (table name) is unique
input2.cfg is not so large that we run the risk of using up all memory when storing input2.cfg in an associative array (otherwise we could store the table names from input1.cfg, assuming it is the smaller file, in the array and swap the processing order of the two files)
there are no explicit requirements for data to be sorted (otherwise we may need to add a sort or two)
a bash solution is sufficient (based on the inclusion of the #!/bin/bash shebang in the OP's current code)
There are many ways to slice-n-dice this one (awk being my preference, but the OP doesn't want to use awk). For this particular answer I'll pull the awk steps out into separate bash commands.
NOTE: While we could use a set of nested loops (as in the OP's code), I've opted to use an associative array to store input2.cfg, thus eliminating the need to repeatedly scan input2.cfg.
#!/usr/bin/bash
input1=input1.cfg
input2=input2.cfg
> output.txt                           # clear out the target file
# load ${input2} into an associative array
unset lines
typeset -A lines                       # associative array for storing contents of ${input2}
while read -r line
do
    x="${line%;*}"                     # use parameter expansion
    tabname="${x#*;}"                  # to parse out table name
    lines["${tabname}"]="${line}"      # add to array
done < "${input2}"
# process ${input1}
while read -r c1 c2 tabname rest_of_line
do
    [[ -v lines["${tabname}"] ]] &&                  # if tabname has an entry in our array
        echo "${lines[${tabname}]}" >> output.txt && # then dump the associated line (from ${input2}) to output.txt
        continue                                     # process next line from ${input1}
    echo "Table ${tabname} not found"                # otherwise print 'not found' message
done < "${input1}"
# display contents of output.txt
echo "++++++++++++++++ output.txt"
cat output.txt
echo "++++++++++++++++"
This generates the following:
Table bb not found
Table cc not found
++++++++++++++++ output.txt
TC;aa;3
TC;dd;3
++++++++++++++++

Using nzload to upload a file with two differing date formats

I am trying to load onto Netezza a file from a table in an Oracle database. The file contains two separate date formats: one field has the format
DD-MON-YY and the second field has the format DD-MON-YYYY hh24:MI:SS. Is there any way within NZLOAD to cater for two different date formats within a file?
Thanks
rob..
If your file is fixed-length, you can use zones.
However, if it's field-delimited, you can use a preprocessing tool like sed to convert all the dates/timestamps to one standard format before piping the output to nzload.
For example:
1. 01-JAN-17
2. 01-JAN-2017 11:20:32
Let's convert the date field to the same format:
cat output.dat |\
sed -E 's/([0-9]{2})-([A-Z]{3})-([0-9]{2})([^0-9]|$)/\1-\2-20\3\4/g' |\
nzload -dateStyle DMONY -dateDelim '-'
The sed expression is pretty simple here; let's break it down:
# looking for 2 digits followed by
# 3 uppercase letters followed by
# 2 digits, all separated by '-', then a non-digit or end of line
# (the guard keeps DD-MON-YY from also matching inside DD-MON-YYYY);
# elements are grouped with '()' so they can be referred to by number
's/([0-9]{2})-([A-Z]{3})-([0-9]{2})([^0-9]|$)
# reconstruct the date using the group numbers and separator, prefixing 20 to YY
/\1-\2-20\3\4
# apply globally
/g'
Also, in nzload we have specified the date format and its delimiter.
The regular expression would have to be adapted for other source and target date formats, so this may not be a universal solution.
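A quick sanity check of the substitution before wiring it to nzload (the sample line is made up; the ([^0-9]|$) guard relies on GNU sed accepting $ inside a group):
echo '01-JAN-17;01-JAN-2017 11:20:32' | \
sed -E 's/([0-9]{2})-([A-Z]{3})-([0-9]{2})([^0-9]|$)/\1-\2-20\3\4/g'
# -> 01-JAN-17 becomes 01-JAN-2017; the DD-MON-YYYY field is left alone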

How to get the time difference between timestamps kept in two different files in Unix?

I have two files, each with some 200K timestamps in a single column. I want to find the difference between corresponding rows (mapped one to one) in seconds.
For example:
One file has 2013-06-04 11:21:28 and the second file has 2013-06-04 11:21:55 in the same row, so I want the output to be 27, that is, 27 seconds.
Can someone help me with a Unix command to do this?
https://github.com/hroptatyr/dateutils ddiff to the rescue:
ddiff 2012-03-01T12:17:00 2012-03-02T14:00:00
=>
92580s
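To apply ddiff row by row to the two files (named a and b, as in the answer below), one sketch is to paste them side by side and rebuild the ISO form that ddiff parses by default:
paste a b | while read -r d1 t1 d2 t2
do
    ddiff "${d1}T${t1}" "${d2}T${t2}"
done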
paste -d, a b | while IFS=, read -r t1 t2
do
    echo "$(( $(date -d "$t2" +%s) - $(date -d "$t1" +%s) ))"
done
That should do it.
Filenames are assumed to be "a" and "b".
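With ~200K rows, forking date twice per line can get slow; if GNU awk is available, mktime() does the same conversion in a single process (a sketch, same a and b filenames):
paste a b | gawk -F'\t' '{
    t1 = $1; t2 = $2
    gsub(/[-:]/, " ", t1)    # "2013-06-04 11:21:28" -> "2013 06 04 11 21 28"
    gsub(/[-:]/, " ", t2)
    print mktime(t2) - mktime(t1)    # difference in epoch seconds
}'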