Extracting all rows containing a specific datetime value (MATLAB) - matlab

I have a table which looks like this:
Entry number   Timestamp          Value1   Value2   Value3   Value4
5758           28-06-2018 16:30   34       63       34.2     60.9
5759           28-06-2018 17:00   33.5     58       34.9     58.4
5760           28-06-2018 17:30   33       53       35.2     58.5
5761           28-06-2018 18:00   33       63       35       57.9
5762           28-06-2018 18:30   33       61       34.6     58.9
5763           28-06-2018 19:00   33       59       34.1     59.4
5764           28-06-2018 19:30   28       89       33.5     64.2
5765           28-06-2018 20:00   28       89       33       66.1
5766           28-06-2018 20:30   28       83       32.5     67
5767           28-06-2018 21:00   29       89       32.2     68.4
Here '28-06-2018 16:30' sits in a single column, so I have 6 columns:
Entry number, Timestamp, Value1, Value2, Value3, Value4
I want to extract all rows that belong to '28-06-2018', i.e. all data pertaining to that day. My table is too large to show more of it here, but the timestamps span a couple of months.

t = table([5758;5759],["28-06-2018 16:30";"29-06-2018 16:30"],[34;33.5],'VariableNames',{'Entry number','Timestamp','Value1'})

t =

  2×3 table

    Entry number        Timestamp         Value1
    ____________    __________________    ______

        5758        "28-06-2018 16:30"     34
        5759        "29-06-2018 16:30"    33.5

t(contains(t.('Timestamp'),"28-06"),:)

ans =

  1×3 table

    Entry number        Timestamp         Value1
    ____________    __________________    ______

        5758        "28-06-2018 16:30"     34
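If you later need actual date comparisons rather than text matching, a datetime-based variant is a more robust sketch (assuming the timestamps really are strings in the dd-MM-yyyy HH:mm format shown):

dt = datetime(t.Timestamp, 'InputFormat', 'dd-MM-uuuu HH:mm');   % parse the text timestamps
t(dateshift(dt, 'start', 'day') == datetime(2018, 6, 28), :)     % keep rows whose calendar day is 28-Jun-2018

Comparing on parsed dates avoids accidental matches of "28-06" in a different year.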

Related

Address and smoothen noise in sensor data

I have sensor data as below where, under the Data column, there are 6 rows containing the value 45 in between preceding and following rows containing the value 50. The requirement is to clean this data and impute 50 (the previous value) in the new_data column. Moreover, the number of noise records (shown as 45 in the table) may vary, both in count and in where they occur.
Case 1 (sample data) :-
Sl.no   Timestamp          Data   New_data
1       1/1/2021 0:00:00   50     50
2       1/1/2021 0:15:00   50     50
3       1/1/2021 0:30:00   50     50
4       1/1/2021 0:45:00   50     50
5       1/1/2021 1:00:00   50     50
6       1/1/2021 1:15:00   50     50
7       1/1/2021 1:30:00   50     50
8       1/1/2021 1:45:00   50     50
9       1/1/2021 2:00:00   50     50
10      1/1/2021 2:15:00   50     50
11      1/1/2021 2:30:00   45     50
12      1/1/2021 2:45:00   45     50
13      1/1/2021 3:00:00   45     50
14      1/1/2021 3:15:00   45     50
15      1/1/2021 3:30:00   45     50
16      1/1/2021 3:45:00   45     50
17      1/1/2021 4:00:00   50     50
18      1/1/2021 4:15:00   50     50
19      1/1/2021 4:30:00   50     50
20      1/1/2021 4:45:00   50     50
21      1/1/2021 5:00:00   50     50
22      1/1/2021 5:15:00   50     50
23      1/1/2021 5:30:00   50     50
I am thinking of grouping these data ordered by timestamp ascending (like below) and then having a condition that checks group by group over the large sample: if group 1 has the same value as group 3, replace group 2 with group 1's value.
Sl.no   Timestamp          Data   New_data   group
1       1/1/2021 0:00:00   50     50         1
2       1/1/2021 0:15:00   50     50         1
3       1/1/2021 0:30:00   50     50         1
4       1/1/2021 0:45:00   50     50         1
5       1/1/2021 1:00:00   50     50         1
6       1/1/2021 1:15:00   50     50         1
7       1/1/2021 1:30:00   50     50         1
8       1/1/2021 1:45:00   50     50         1
9       1/1/2021 2:00:00   50     50         1
10      1/1/2021 2:15:00   50     50         1
11      1/1/2021 2:30:00   45     50         2
12      1/1/2021 2:45:00   45     50         2
13      1/1/2021 3:00:00   45     50         2
14      1/1/2021 3:15:00   45     50         2
15      1/1/2021 3:30:00   45     50         2
16      1/1/2021 3:45:00   45     50         2
17      1/1/2021 4:00:00   50     50         3
18      1/1/2021 4:15:00   50     50         3
19      1/1/2021 4:30:00   50     50         3
20      1/1/2021 4:45:00   50     50         3
21      1/1/2021 5:00:00   50     50         3
22      1/1/2021 5:15:00   50     50         3
23      1/1/2021 5:30:00   50     50         3
Moreover, there is also a need for an exception: if the next group shows the same pattern, do not change it but retain the data as it is.
Example below: if group 1 and group 3 are the same, impute group 2 with group 1's value.
But if group 2 and group 4 are the same, do not change group 3; retain the same data in New_data.
Case 2:-
Sl.no   Timestamp          Data   New_data   group
1       1/1/2021 0:00:00   50     50         1
2       1/1/2021 0:15:00   50     50         1
3       1/1/2021 0:30:00   50     50         1
4       1/1/2021 0:45:00   50     50         1
5       1/1/2021 1:00:00   50     50         1
6       1/1/2021 1:15:00   50     50         1
7       1/1/2021 1:30:00   50     50         1
8       1/1/2021 1:45:00   50     50         1
9       1/1/2021 2:00:00   50     50         1
10      1/1/2021 2:15:00   50     50         1
11      1/1/2021 2:30:00   45     50         2
12      1/1/2021 2:45:00   45     50         2
13      1/1/2021 3:00:00   45     50         2
14      1/1/2021 3:15:00   45     50         2
15      1/1/2021 3:30:00   45     50         2
16      1/1/2021 3:45:00   45     50         2
17      1/1/2021 4:00:00   50     50         3
18      1/1/2021 4:15:00   50     50         3
19      1/1/2021 4:30:00   50     50         3
20      1/1/2021 4:45:00   50     50         3
21      1/1/2021 5:00:00   50     50         3
22      1/1/2021 5:15:00   50     50         3
23      1/1/2021 5:30:00   50     50         3
24      1/1/2021 5:45:00   45     45         4
25      1/1/2021 6:00:00   45     45         4
26      1/1/2021 6:15:00   45     45         4
27      1/1/2021 6:30:00   45     45         4
28      1/1/2021 6:45:00   45     45         4
29      1/1/2021 7:00:00   45     45         4
30      1/1/2021 7:15:00   45     45         4
31      1/1/2021 7:30:00   45     45         4
I am reaching out for help coding this in PostgreSQL. Please feel free to suggest alternative approaches to the problem.
The query below should answer the need.
The first CTE identifies the rows that correspond to a change of data.
The second CTE groups the rows between two successive changes of data and sets up the corresponding timestamp range.
The third CTE is a recursive query that calculates new_data iteratively in timestamp order.
The final query displays the expected result.
WITH RECURSIVE list As
(
SELECT no
, timestamp
, lag(data) OVER w AS previous
, data
, lead(data) OVER w AS next
, data IS DISTINCT FROM lag(data) OVER w AS first
, data IS DISTINCT FROM lead(data) OVER w AS last
FROM sensors
WINDOW w AS (ORDER BY timestamp ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)
), range_list AS
(
SELECT tsrange(timestamp, lead(timestamp) OVER w, '[]') AS range
, previous
, data
, lead(next) OVER w AS next
, first
FROM list
WHERE first OR last
WINDOW w AS (ORDER BY timestamp ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
), rec_list (range, previous, data, next, new_data, arr) AS
(
SELECT range
, previous
, data
, next
, data
, array[range]
FROM range_list
WHERE previous IS NULL
UNION ALL
SELECT c.range
, p.data
, c.data
, c.next
, CASE
WHEN p.new_data IS NOT DISTINCT FROM c.next
THEN p.data
ELSE c.data
END
, p.arr || c.range
FROM rec_list AS p
INNER JOIN range_list AS c
ON lower(c.range) = upper(p.range) + interval '15 minutes'
WHERE NOT array[c.range] <# p.arr
AND first
)
SELECT s.*, r.new_data
FROM sensors AS s
INNER JOIN rec_list AS r
ON r.range #> s.timestamp
ORDER BY timestamp
see the test result in dbfiddle
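For anyone reproducing this, the query assumes a sensors table with the three columns it references (no, timestamp, data). A minimal setup sketch, showing only a few of the Case 1 rows and with assumed column types, could look like:

CREATE TABLE sensors (
    no        integer PRIMARY KEY,
    timestamp timestamp NOT NULL,
    data      integer
);

-- a few sample rows from Case 1; load the full series the same way
INSERT INTO sensors (no, timestamp, data) VALUES
    (1,  '2021-01-01 00:00:00', 50),
    (10, '2021-01-01 02:15:00', 50),
    (11, '2021-01-01 02:30:00', 45),
    (17, '2021-01-01 04:00:00', 50);

The only hard requirements from the query are that timestamp is a timestamp type (it is fed to tsrange) and that rows are spaced 15 minutes apart (the recursive join uses + interval '15 minutes').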

pyspark - converting DF Structure

I am new to Python and Spark programming.
I have data in Format-1 given below, which captures data for different fields based on timestamp and trigger.
I need to convert this data into Format-2, i.e., group all the fields given in Format-1 by timestamp and key and create records as per Format-2. In Format-1, there are fields that do not have any key value (timestamp and trigger); these fields should be populated for all the records in Format-2.
Can you please suggest the best approach to do this in pyspark?
Format-1:
Event time (key-1) trig (key-2) data field_Name
------------------------------------------------------
2021-05-01T13:57:29Z 30Sec 10 A
2021-05-01T13:57:59Z 30Sec 11 A
2021-05-01T13:58:29Z 30Sec 12 A
2021-05-01T13:58:59Z 30Sec 13 A
2021-05-01T13:59:29Z 30Sec 14 A
2021-05-01T13:59:59Z 30Sec 15 A
2021-05-01T14:00:29Z 30Sec 16 A
2021-05-01T14:00:48Z OFF 17 A
2021-05-01T13:57:29Z 30Sec 110 B
2021-05-01T13:57:59Z 30Sec 111 B
2021-05-01T13:58:29Z 30Sec 112 B
2021-05-01T13:58:59Z 30Sec 113 B
2021-05-01T13:59:29Z 30Sec 114 B
2021-05-01T13:59:59Z 30Sec 115 B
2021-05-01T14:00:29Z 30Sec 116 B
2021-05-01T14:00:48Z OFF 117 B
2021-05-01T14:00:48Z OFF 21 C
2021-05-01T14:00:48Z OFF 31 D
Null Null 41 E
Null Null 51 F
Format-2:
Event Time Trig A B C D E F
--------------------------------------------------------------
2021-05-01T13:57:29Z 30Sec 10 110 Null Null 41 51
2021-05-01T13:57:59Z 30Sec 11 111 Null Null 41 51
2021-05-01T13:58:29Z 30Sec 12 112 Null Null 41 51
2021-05-01T13:58:59Z 30Sec 13 113 Null Null 41 51
2021-05-01T13:59:29Z 30Sec 14 114 Null Null 41 51
2021-05-01T13:59:59Z 30Sec 15 115 Null Null 41 51
2021-05-01T14:00:29Z 30Sec 16 116 Null Null 41 51
2021-05-01T14:00:48Z OFF 17 117 21 31 41 51
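One possible approach, as an untested sketch (the column names event_time, trig, data and field_name are placeholders for the four Format-1 columns): pivot the keyed rows so that each field_name becomes its own column, then attach the key-less fields (E, F) to every row.

from pyspark.sql import functions as F

keyed   = df.filter(F.col("event_time").isNotNull())   # rows that carry the timestamp/trigger keys
keyless = df.filter(F.col("event_time").isNull())      # rows like E and F that have no key

# one column per field_name, one row per (event_time, trig)
wide = (keyed.groupBy("event_time", "trig")
             .pivot("field_name")
             .agg(F.first("data")))

# pivot the key-less fields into a single constant row and copy it onto every row
constants = keyless.groupBy().pivot("field_name").agg(F.first("data"))
result = wide.crossJoin(constants).orderBy("event_time")

Fields like C and D that only occur at one timestamp come out as Null at the other timestamps, which matches Format-2.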

time bucketing with cumsum condition

Hello Fellow Kdb Mortals :D
Stuck on a pretty weird problem here. I have a table like
time col is xbar-ed to 5-mins
time code name count
--------------------------------
00:00 SPY S&P.. 15
00:00 QQQ ... 88
00:00 IWM ... 100
00:00 XLE ... 80
00:05 QQQ ... 20
00:05 SPY ... 75
00:10 QQQ ... 22
00:10 XLE ... 10
00:15 SPY ... 23
.....
.....
23:40 XLE ... 11
23:50 SPY ... 16
23:55 IWM ... 100
23:55 QQQ ... 10
What I want to be returned is a table like (from asc time)
code name stime etime cumcount
------------------------------------------------
SPY S&P... 00:00 00:15 113 <-- 15+75+23
QQQ ... 00:00 00:05 108 <-- 88+20
IWM ... 00:00 00:00 100 <-- 100
XLE ... 00:00 23:40 101 <-- 80+10+11
Notice the condition on this time bucket: for each (code, name) pair, the bucket runs until the first row where the cumulative count reaches 100 or more.
I can also generate another table from the bottom up (desc time):
code name stime etime cumcount
------------------------------------------------
SPY ... 23:50 20:10 103
QQQ ... 23:55 21:45 118
IWM ... 23:55 23:55 100
XLE ... 23:40 00:00 101 <-- 11+10+80
I have been at this for a couple of hours, but can't get this working. Basic select and sums don't get me anywhere. I could use loops but thought I should check in here first before I go down that lane.
Any help is appreciated :D
Assuming you have a table sorted ascending on time i.e.:
`time xasc `t
Something like this could work
q)t1:update cumcount:sums cnt,stime:first time by code,name from t
q)select code,name,stime,etime:time, cumcount from t1 where cumcount>=100,i=(first;i) fby ([]code;name)
Notice that I have relabelled count as cnt to prevent a clash with the count function that already exists in the q language.
So first you calculate your cumulative count in the update statement.
Then select from the resulting table in such a way that you first pull out only those records where the cumulative count is >= 100, then use fby to filter this down again and pull out the first record for each distinct (code;name) pair.
In this example stime is the time of the first entry for each (code;name) pair and etime is the time at which the running count first reaches 100.
I prefer Sean's solution, but for the sake of an alternative:
q)t:update name:string lower code from([]time:"u"$0 0 0 0 5 5 10 10 15 1420 1430 1435 1435;code:`SPY`QQQ`IWM`XLE 0 1 2 3 1 0 1 3 0 3 0 2 1;cnt:15 88 100 80 20 75 22 10 23 11 16 100 10);
q)exec{x x[`cumcnt]binr 100}[([]stime:first time;etime:time;cumcnt:sums cnt)]by code,name from t
code name | stime etime cumcnt
----------| ------------------
IWM "iwm"| 00:00 00:00 100
QQQ "qqq"| 00:00 00:05 108
SPY "spy"| 00:00 00:15 113
XLE "xle"| 00:00 23:40 101
Summing from the bottom would be:
q)exec{x x[`cumcnt]binr 100}[([]stime:last time;etime:reverse time;cumcnt:sums reverse cnt)]by code,name from t
code name | stime etime cumcnt
----------| ------------------
IWM "iwm"| 23:55 23:55 100
QQQ "qqq"| 23:55 00:00 140
SPY "spy"| 23:50 00:05 114
XLE "xle"| 23:40 00:00 101

Add null to the columns which are empty

I am trying to put null in the columns which are empty, using perl or awk; to find the number of columns, the header's column count can be used. I tried a solution using perl and some regex. The output looks very close to the desired output, but if you look carefully, row number one shows incorrect data.
Input data:
id name type foo-id zoo-id loo-id-1 moo-id-2
----- --------------- ----------- ------ ------ ------ ------
0 zoo123 soozoo 8 31 32
51 zoo213 soozoo 48 51
52 asz123 soozoo 47 52
53 asw122 soozoo 1003 53
54 fff123 soozoo 68 54
55 sss123 soozoo 75 55
56 ssd123 soozoo 76 56
Expected Output:
0 zoo123 soozoo 8 null 31 32
51 zoo213 soozoo 48 51 null null
52 asz123 soozoo 47 52 null null
53 asw122 soozoo 1003 53 null null
54 fff123 soozoo 68 54 null null
55 sss123 soozoo 75 55 null null
56 ssd123 soozoo 76 56 null null
Very close to the solution, but row 1 shows incorrect data:
echo "$x"|grep -E '^[0-9]+' |perl -ne 'm/^([\d]+)(?:\s+([\w]+))?(?:\s+([-\w]+))?(?:\s+([\d]+))?(?:\s+([\d]+))?(?:\s+([\d]+))?(?:\s+([\d]+))?/;printf "%s %s %s %s %s %s %s\n", $1, $2//"null", $3//"null",$4//"null",$5//"null",$6//"null",$7//"null"' |column -t
0 zoo123 soozoo 8 31 32 null
51 zoo213 soozoo 48 51 null null
52 asz123 soozoo 47 52 null null
53 asw122 soozoo 1003 53 null null
54 fff123 soozoo 68 54 null null
55 sss123 soozoo 75 55 null null
56 ssd123 soozoo 76 56 null null
When you have a fixed-width string to parse, you'll find that unpack() is a better tool than regexes.
This should demonstrate how to do it. I'll leave it to you to convert it to a one-liner.
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';
use Data::Dumper;
while (<DATA>) {
    next if /^\D/; # Skip lines that don't start with a digit

    # I worked out the unpack() template by counting columns.
    my @data = map { /\S/ ? $_ : 'null' } unpack('A7A14A16A8A8A8A8');

    say join ' ', @data;
}
__DATA__
id name type foo-id zoo-id loo-id-1 moo-id-2
----- --------------- ----------- ------ ------ ------ ------
0 zoo123 soozoo 8 31 32
51 zoo213 soozoo 48 51
52 asz123 soozoo 47 52
53 asw122 soozoo 1003 53
54 fff123 soozoo 68 54
55 sss123 soozoo 75 55
56 ssd123 soozoo 76 56
Output:
$ perl unpack | column -t
0 zoo123 soozoo 8 null 31 32
51 zoo213 soozoo 48 51 null null
52 asz123 soozoo 47 52 null null
53 asw122 soozoo 1003 53 null null
54 fff123 soozoo 68 54 null null
55 sss123 soozoo 75 55 null null
56 ssd123 soozoo 76 56 null null
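If you do want it as a one-liner, the same logic (same unpack template and the same skip rule, so the same assumptions about the column widths) can be folded into something like:

perl -nle 'next if /^\D/; print join " ", map { /\S/ ? $_ : "null" } unpack "A7A14A16A8A8A8A8"' file | column -t

where file holds the raw listing (or pipe echo "$x" into it as in the question).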
With GNU awk:
awk 'NR>2{                        # ignore first and second row
    NF=7                          # fix number of columns
    for(i=1; i<=NF; i++)          # loop with all columns
        if($i ~ /^ *$/){          # if empty or only spaces
            $i="null"
        }
    print $0
}' FIELDWIDTHS='7 14 16 8 8 10 8' OFS='|' file | column -s '|' -t
As one line:
awk 'NR>2{NF=7; for(i=1;i<=NF;i++) if($i ~ /^ *$/){$i="null"} print $0}' FIELDWIDTHS='7 14 16 8 8 10 8' OFS='|' file | column -s '|' -t
Output:
0 zoo123 soozoo 8 null 31 32
51 zoo213 soozoo 48 51 null null
52 asz123 soozoo 47 52 null null
53 asw122 soozoo 1003 53 null null
54 fff123 soozoo 68 54 null null
55 sss123 soozoo 75 55 null null
56 ssd123 soozoo 76 56 null null
See: 8 Powerful Awk Built-in Variables – FS, OFS, RS, ORS, NR, NF, FILENAME, FNR

Add unique rows for each group when similar group repeats after certain rows

Hi, can anyone help me get a unique group number?
I need a group number that is unique to each run of rows, even when the same product reappears after some other groups.
I have the following data:
id version product startdate enddate
123 0 2443 2010/09/01 2011/01/02
123 1 131 2011/01/03 2011/03/09
123 2 131 2011/08/10 2012/09/10
123 3 3009 2012/09/11 2014/03/31
123 4 668 2014/04/01 2014/04/30
123 5 668 2014/05/01 2016/01/01
123 6 668 2016/01/02 2017/09/08
123 7 131 2017/09/09 2017/10/10
123 8 131 2018/10/11 2019/01/01
123 9 550 2019/01/02 2099/01/01
select *,
dense_rank()over(partition by id order by id,product)
from table
Expected results:
id version product startdate enddate count
123 0 2443 2010/09/01 2011/01/02 1
123 1 131 2011/01/03 2011/03/09 2
123 2 131 2011/08/10 2012/09/10 2
123 3 3009 2012/09/11 2014/03/31 3
123 4 668 2014/04/01 2014/04/30 4
123 5 668 2014/05/01 2016/01/01 4
123 6 668 2016/01/02 2017/09/08 4
123 7 131 2017/09/09 2017/10/10 5
123 8 131 2018/10/11 2019/01/01 5
123 9 550 2019/01/02 2099/01/01 6
Try the following. The inner query uses LAG to flag each row where product differs from the previous row (per id); the running SUM of those flags, plus 1, gives a group number that increments at every change, so a product that reappears later gets a new number (which DENSE_RANK alone cannot do):
SELECT
    id, version, product, startdate, enddate,
    1 + SUM(v) OVER (PARTITION BY id ORDER BY version) n
FROM
(
    SELECT
        *,
        IIF(LAG(product) OVER (PARTITION BY id ORDER BY version) <> product, 1, 0) v
    FROM TestTable
) q
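If you want to try it quickly, a minimal setup sketch (SQL Server syntax, since the answer relies on IIF and LAG; the table name and column types are assumptions) would be:

CREATE TABLE TestTable (
    id int, version int, product int,
    startdate date, enddate date
);

INSERT INTO TestTable VALUES
    (123, 0, 2443, '2010-09-01', '2011-01-02'),
    (123, 1, 131,  '2011-01-03', '2011-03-09'),
    (123, 2, 131,  '2011-08-10', '2012-09-10'),
    (123, 3, 3009, '2012-09-11', '2014-03-31');

Running the query above against these rows should return n values 1, 2, 2, 3 for versions 0 to 3.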