averaging data at sub-hourly scale in MATLAB

I have used the script below to read in my data (sample below) and am able to compute the hourly and daily mean of heat flux (H) by accumarray-ing on the date and time stamps. The difficulty is that I also want to accumarray for 15-minute, 30-minute, etc. averages. How can one do this with the kind of data I have?
LASfile=fopen('../../data/heat.txt');
Data = textscan(LASfile, '%16c %*24c %s %s %f %f %f %d %d %d %d','headerlines',1);
fclose(LASfile);
H = Data{6}; % heat flux values
%%
date_num = datestr(Data{1});
formatIn = 'dd-mmm-yyyy HH:MM:SS';
DateVector = datevec(date_num, formatIn);
%%
% Group by year, month, day and hour
[unDates, ~, subs] = unique(DateVector(:,1:4),'rows');
% Accumulate within each hour
hourlyH = [unDates accumarray(subs, H, [], @mean)]; % hourly mean heat flux
#timeStamp date time Cn2 CT2 H
2012-02-07 11:56:00 2/7/2012 11:56:00 3.11E-13 3.64E-01 330.5
2012-02-07 11:57:00 2/7/2012 11:57:00 2.22E-13 2.60E-01 256.4
2012-02-07 11:58:00 2/7/2012 11:58:00 2.92E-13 3.42E-01 315.3
2012-02-07 11:59:00 2/7/2012 11:59:00 4.07E-13 4.77E-01 404.4
2012-02-07 12:00:00 2/7/2012 12:00:00 3.56E-13 4.17E-01 365.7
2012-02-07 12:01:00 2/7/2012 12:01:00 4.41E-13 5.17E-01 429.3
2012-02-07 12:02:00 2/7/2012 12:02:00 4.23E-13 4.96E-01 416.3
2012-02-07 12:03:00 2/7/2012 12:03:00 3.17E-13 3.72E-01 335.3
2012-02-07 12:04:00 2/7/2012 12:04:00 3.42E-13 4.00E-01 354.7
2012-02-07 12:05:00 2/7/2012 12:05:00 3.43E-13 4.02E-01 355.6
2012-02-07 12:07:00 2/7/2012 12:07:00 2.92E-13 3.42E-01 315.3
2012-02-07 12:08:00 2/7/2012 12:08:00 2.63E-13 3.09E-01 291.7
2012-02-07 12:09:00 2/7/2012 12:09:00 2.45E-13 2.87E-01 276.1
2012-02-07 12:10:00 2/7/2012 12:10:00 3.00E-13 3.52E-01 321.8
2012-02-07 12:11:00 2/7/2012 12:11:00 3.77E-13 4.42E-01 382
2012-02-07 12:12:00 2/7/2012 12:12:00 4.40E-13 5.16E-01 428.9
2012-02-07 12:13:00 2/7/2012 12:13:00 3.60E-13 4.22E-01 369.2
2012-02-07 12:14:00 2/7/2012 12:14:00 4.56E-13 5.35E-01 440.4
2012-02-07 12:15:00 2/7/2012 12:15:00 3.62E-13 4.24E-01 370.5
2012-02-07 12:16:00 2/7/2012 12:16:00 3.48E-13 4.07E-01 359.3
2012-02-07 12:17:00 2/7/2012 12:17:00 3.94E-13 4.62E-01 394.9
2012-02-07 12:18:00 2/7/2012 12:18:00 3.53E-13 4.14E-01 363.5
2012-02-07 12:19:00 2/7/2012 12:19:00 4.47E-13 5.24E-01 433.6
2012-02-07 12:20:00 2/7/2012 12:20:00 4.33E-13 5.07E-01 423.6
2012-02-07 12:21:00 2/7/2012 12:21:00 3.18E-13 3.73E-01 336
2012-02-07 12:22:00 2/7/2012 12:22:00 2.91E-13 3.41E-01 314.7
2012-02-07 12:23:00 2/7/2012 12:23:00 2.71E-13 3.17E-01 297.8
2012-02-07 12:24:00 2/7/2012 12:24:00 3.72E-13 4.36E-01 378.2
2012-02-07 12:25:00 2/7/2012 12:25:00 3.25E-13 3.81E-01 341.8
2012-02-07 12:26:00 2/7/2012 12:26:00 3.66E-13 4.29E-01 373.3
2012-02-07 12:27:00 2/7/2012 12:27:00 3.95E-13 4.63E-01 395.3
2012-02-07 12:28:00 2/7/2012 12:28:00 3.73E-13 4.37E-01 378.9
2012-02-07 12:29:00 2/7/2012 12:29:00 3.31E-13 3.89E-01 346.7
2012-02-07 12:30:00 2/7/2012 12:30:00 3.05E-13 3.57E-01 325.7

You should include the fifth element of DateVector (the minutes), rounded down to the bin size you need. For example, to use 15-minute periods:
DateVector2 = DateVector(:,1:5);
DateVector2(:,5) = floor(DateVector(:,5)/15);
And then you accumarray based on this DateVector2:
[unDates, ~, subs] = unique(DateVector2,'rows');
% Accumulate within each 15-minute bin
binH = [unDates accumarray(subs, H, [], @mean)]; % 15-minute mean heat flux
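If you want arbitrary bin widths (30 minutes, 10 minutes, ...) without editing DateVector columns, a minimal alternative sketch is to floor serial date numbers to a multiple of the bin length. This assumes H and DateVector from the question's script; binMinutes and binH are just illustrative names:
binMinutes = 30;                         % desired averaging period in minutes
t = datenum(DateVector);                 % timestamps as serial day numbers
binLen = binMinutes/(24*60);             % bin length in days
binStart = floor(t/binLen)*binLen;       % start of the bin each sample falls into
[unBins, ~, subs] = unique(binStart);    % one entry per occupied bin
binH = accumarray(subs, H, [], @mean);   % mean heat flux per bin
datestr(unBins, 'yyyy-mm-dd HH:MM')      % readable bin start times
Any width that divides evenly into a day gives clock-aligned bins; samples falling exactly on a bin edge may be shifted by floating-point rounding.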

Related

How to extract the whole hours from a time range in Postgresql and get the duration of each extracted hour

I'm new to databases (even more so to Postgres), so I hope you can help me. I have a table something like this:
id_interaction | start_time          | end_time
---------------+---------------------+---------------------
0001           | 2022-06-03 12:40:10 | 2022-06-03 12:45:16
0002           | 2022-06-04 10:50:40 | 2022-06-04 11:10:12
0003           | 2022-06-04 16:30:00 | 2022-06-04 18:20:00
0004           | 2022-06-05 23:00:00 | 2022-06-06 10:30:12
Basically I need to create a query to get the duration, split up by hour, for example:
id_interaction | start_time          | end_time            | hour     | duration
---------------+---------------------+---------------------+----------+---------
0001           | 2022-06-03 12:40:10 | 2022-06-03 12:45:16 | 12:00:00 | 00:05:06
0002           | 2022-06-04 10:50:40 | 2022-06-04 11:10:12 | 10:00:00 | 00:09:20
0002           | 2022-06-04 10:50:40 | 2022-06-04 11:10:12 | 11:00:00 | 00:10:12
0003           | 2022-06-04 16:30:00 | 2022-06-04 18:20:00 | 16:00:00 | 00:30:00
0003           | 2022-06-04 16:30:00 | 2022-06-04 18:20:00 | 17:00:00 | 01:00:00
0003           | 2022-06-04 16:30:00 | 2022-06-04 18:20:00 | 18:00:00 | 00:20:00
0004           | 2022-06-05 23:00:00 | 2022-06-06 03:30:12 | 23:00:00 | 01:00:00
0004           | 2022-06-05 23:00:00 | 2022-06-06 03:30:12 | 24:00:00 | 01:00:00
0004           | 2022-06-05 23:00:00 | 2022-06-06 03:30:12 | 01:00:00 | 01:00:00
0004           | 2022-06-05 23:00:00 | 2022-06-06 03:30:12 | 02:00:00 | 01:00:00
0004           | 2022-06-05 23:00:00 | 2022-06-06 03:30:12 | 03:00:00 | 00:30:12
I need all the hours from start to finish. For example: if an id starts at 17:10 and ends at 19:00, I need the duration within each of the hours 17:00, 18:00 and 19:00.
If you're trying to get the duration in each whole-hour interval overlapped by your data, this can be achieved by rounding timestamps with date_trunc(), stepping through the intervals with generate_series(), and casting between time, interval and timestamp:
create or replace function hours_crossed(starts timestamp, ends timestamp)
returns integer
language sql as '
  select case
    when date_trunc(''hour'', starts) = date_trunc(''hour'', ends)
      then 0
    when date_trunc(''hour'', starts) = starts
      then floor(extract(''epoch'' from ends - starts)::numeric / 60.0 / 60.0)
    else floor(extract(''epoch'' from ends - starts)::numeric / 60.0 / 60.0) + 1
  end';

select *
from (
  select
    id_interacao,
    tempo_inicial,
    tempo_final,
    to_char(hora, 'HH24:00')::time as hora,
    least(tempo_final, hora + '1 hour'::interval)
      - greatest(tempo_inicial, hora) as duracao
  from (
    select
      *,
      date_trunc('hour', tempo_inicial)
        + (generate_series(0, hours_crossed(tempo_inicial, tempo_final))::text || ' hours')::interval
        as hora
    from test_times
  ) a
) a
where duracao <> '0'::interval;
This also fixes your first entry that lasts 5 minutes but shows as 45.
You'll need to decide how you want to handle zero-length intervals and ones that end on an exact hour; I added a condition to skip them.

Postgresql group by recurring items

I'm using postgresql to store historical data coming from an RTLS platform.
Position data is not collected continuously.
The historical_movements table is implemented as a single table as follows (it is simplified, but enough to present the use case):
User Area EnterTime ExitTime
John room1 2018-01-01 10:00:00 2018-01-01 10:00:05
Doe room1 2018-01-01 10:00:00 2018-01-01 10:10:00
John room1 2018-01-01 10:05:00 2018-01-01 10:10:00
Doe room1 2018-01-01 10:20:00 2018-01-01 10:30:00
John room2 2018-01-01 11:00:00 2018-01-01 11:05:00
John room2 2018-01-01 11:08:00 2018-01-01 11:15:00
John room1 2018-01-01 12:00:00 2018-01-01 12:08:00
John room1 2018-01-01 12:10:00 2018-01-01 12:20:00
John room1 2018-01-01 12:25:00 2018-01-01 12:25:00
John room3 2018-01-01 12:30:00 2018-01-01 12:35:00
John room3 2018-01-01 12:40:00 2018-01-01 12:50:00
I'm looking for a way to write a query showing each user's stays in the various rooms, aggregating the rows related to the same room and computing the overall staying time, as follows:
User Area EnterTime ExitTime AggregateTime
John room1 2018-01-01 10:00:00 2018-01-01 10:10:00 00:10:00
John room2 2018-01-01 11:00:00 2018-01-01 11:05:00 00:15:00
John room1 2018-01-01 12:00:00 2018-01-01 12:25:00 00:25:00
John room3 2018-01-01 12:30:00 2018-01-01 12:50:00 00:20:00
Doe room1 2018-01-01 10:00:00 2018-01-01 10:30:00 00:30:00
Looking at various threads I'm quite sure I'd have to use lag and partition by functions but it's not clear how.
Any hints?
Best regards.
AggregateTime isn't really an aggregate in your expected result. It seems to be the difference between max_time and min_time for each block, where each block is a set of contiguous rows with the same (users, area).
with block as (
  select users, area, entertime, exittime,
    (row_number() over (order by users, entertime) -
     row_number() over (partition by users, area order by entertime)
    ) as grp
  from your_table
  order by 1, 2, 3
)
select users, area, entertime, exittime, (exittime - entertime) as duration
from (
  select users, area, grp, min(entertime) as entertime, max(exittime) as exittime
  from block
  group by users, area, grp
) t2
order by 5;
I made some changes to 'Resetting Row number according to record data change' to arrive at the solution.

How to get rows between time intervals

I have delivery slots that have a from column (datetime).
Delivery slots are stored as 1 hour to 1 hour and 30 minute intervals, daily.
i.e. 3.00am-4.30am, 6.00am-7.30am, 9.00am-10.30am and so forth
id | from
------+---------------------
1 | 2016-01-01 03:00:00
2 | 2016-01-01 04:30:00
3 | 2016-01-01 06:00:00
4 | 2016-01-01 07:30:00
5 | 2016-01-01 09:00:00
6 | 2016-01-01 10:30:00
7 | 2016-01-01 12:00:00
8 | 2016-01-02 03:00:00
9 | 2016-01-02 04:30:00
10 | 2016-01-02 06:00:00
11 | 2016-01-02 07:30:00
12 | 2016-01-02 09:00:00
13 | 2016-01-02 10:30:00
14 | 2016-01-02 12:00:00
I’m trying to get all delivery_slots between the hours of 3.00am and 4.30am. I've got the following so far:
SELECT *
FROM delivery_slots
WHERE EXTRACT(HOUR FROM delivery_slots.from) >= 3
  AND EXTRACT(MINUTE FROM delivery_slots.from) >= 0
  AND EXTRACT(HOUR FROM delivery_slots.from) <= 4
  AND EXTRACT(MINUTE FROM delivery_slots.from) <= 30;
Which kinda works. Kinda, because it is only returning delivery slots that have minutes of 00.
That's because of the last where condition (EXTRACT(MINUTE FROM delivery_slots.from) <= 30).
To give you an idea of what I am expecting:
id | from
-------+---------------------
1 | 2016-01-01 03:00:00
2 | 2016-01-01 04:30:00
8 | 2016-01-02 03:00:00
9 | 2016-01-02 04:30:00
15 | 2016-01-03 03:00:00
16 | 2016-01-03 04:30:00
etc...
Is there a better way to go about this?
Try this: (not tested)
SELECT *
FROM delivery_slots
WHERE delivery_slots.from::time >= '03:00:00'
  AND delivery_slots.from::time <= '04:30:00';
Hope this helps.
Cheers.
The easiest way to do this, in my mind, is to cast the from column as a type time and do a where >= and <=, like so
select * from testing where (date::time >= '3:00'::time and date::time <= '4:30'::time);

SQL-Server calculating how many instances on a day

I have a table that has ID, start date, and end date
Start_date End_Date ID
2016-03-01 06:30:00.000 2016-03-07 17:30:00.000 782772
2016-03-01 09:09:00.000 2016-03-07 10:16:00.000 782789
2016-03-01 11:17:00.000 2016-03-08 20:10:00.000 782882
2016-03-01 12:22:00.000 2016-03-21 19:40:00.000 782885
2016-03-01 13:15:00.000 2016-03-24 13:37:00.000 783000
2016-03-01 13:23:00.000 2016-03-07 19:15:00.000 782964
2016-03-01 13:55:00.000 2016-03-14 15:45:00.000 782972
2016-03-01 14:05:00.000 2016-03-07 20:32:00.000 783065
2016-03-01 18:06:00.000 2016-03-09 12:42:00.000 782988
2016-03-01 19:05:00.000 2016-04-01 20:00:00.000 782942
2016-03-01 19:15:00.000 2016-03-10 13:30:00.000 782940
2016-03-01 19:15:00.000 2016-03-07 18:00:00.000 783111
2016-03-01 20:10:00.000 2016-03-08 14:05:00.000 783019
2016-03-01 22:15:00.000 2016-03-24 12:46:00.000 782979
2016-03-02 08:00:00.000 2016-03-08 09:02:00.000 783222
2016-03-02 09:31:00.000 2016-03-15 09:16:00.000 783216
2016-03-02 11:04:00.000 2016-03-19 18:49:00.000 783301
2016-03-02 11:23:00.000 2016-03-14 19:49:00.000 783388
2016-03-02 11:46:00.000 2016-03-08 18:10:00.000 783368
2016-03-02 12:03:00.000 2016-03-23 08:44:00.000 783246
2016-03-02 12:23:00.000 2016-03-11 14:45:00.000 783302
2016-03-02 12:24:00.000 2016-03-12 15:30:00.000 783381
2016-03-02 12:30:00.000 2016-03-09 13:58:00.000 783268
2016-03-02 13:00:00.000 2016-03-10 11:30:00.000 783391
2016-03-02 13:35:00.000 2016-03-17 04:40:00.000 783309
2016-03-02 15:05:00.000 2016-04-04 11:52:00.000 783295
2016-03-02 15:08:00.000 2016-03-15 16:15:00.000 783305
2016-03-02 15:32:00.000 2016-03-08 14:20:00.000 783384
2016-03-02 16:49:00.000 2016-03-08 11:40:00.000 783367
2016-03-02 16:51:00.000 2016-03-11 16:00:00.000 783387
2016-03-02 18:00:00.000 2016-03-10 17:00:00.000 783242
2016-03-02 18:37:00.000 2016-03-25 13:30:00.000 783471
2016-03-02 18:45:00.000 2016-03-11 20:15:00.000 783498
2016-03-02 19:41:00.000 2016-03-17 12:34:00.000 783522
2016-03-02 20:08:00.000 2016-03-22 15:30:00.000 783405
2016-03-02 20:16:00.000 2016-03-31 12:30:00.000 783512
2016-03-02 21:45:00.000 2016-03-15 12:25:00.000 783407
2016-03-03 09:59:00.000 2016-03-09 15:00:00.000 783575
2016-03-03 11:18:00.000 2016-03-16 10:30:00.000 783570
2016-03-03 11:27:00.000 2016-03-15 17:28:00.000 783610
2016-03-03 11:36:00.000 2016-03-11 16:05:00.000 783572
2016-03-03 11:55:00.000 2016-03-10 20:15:00.000 783691
2016-03-03 12:10:00.000 2016-03-09 19:50:00.000 783702
2016-03-03 12:11:00.000 2016-03-15 14:08:00.000 783611
2016-03-03 12:55:00.000 2016-03-10 11:50:00.000 783571
2016-03-03 13:20:00.000 2016-04-20 20:37:00.000 783856
2016-03-03 14:08:00.000 2016-03-10 16:00:00.000 783728
2016-03-03 15:10:00.000 2016-03-10 17:00:00.000 783727
2016-03-03 15:20:00.000 2016-03-17 15:14:00.000 783768
2016-03-03 16:55:00.000 2016-03-09 14:09:00.000 783812
2016-03-03 17:00:00.000 2016-03-12 12:33:00.000 783978
2016-03-03 17:17:00.000 2016-03-10 16:00:00.000 783729
2016-03-03 17:42:00.000 2016-03-10 12:13:00.000 783975
2016-03-03 18:23:00.000 2016-03-09 17:00:00.000 783820
2016-03-03 18:31:00.000 2016-03-11 14:00:00.000 783891
2016-03-03 18:59:00.000 2016-03-10 17:00:00.000 783772
2016-03-03 19:48:00.000 2016-03-11 17:30:00.000 783724
2016-03-03 19:50:00.000 2016-03-09 18:00:00.000 783829
2016-03-03 20:48:00.000 2016-03-11 11:04:00.000 783745
2016-03-03 23:00:00.000 2016-03-13 10:59:00.000 783983
2016-03-04 02:50:00.000 2016-03-10 10:45:00.000 783991
2016-03-04 11:25:00.000 2016-03-14 14:50:00.000 784102
2016-03-04 11:28:00.000 2016-03-18 16:21:00.000 784011
2016-03-04 12:01:00.000 2016-03-11 13:20:00.000 784014
2016-03-04 12:15:00.000 2016-03-11 08:00:00.000 784004
2016-03-04 13:06:00.000 2016-03-11 15:00:00.000 784012
2016-03-04 13:37:00.000 2016-03-10 18:00:00.000 784200
2016-03-04 13:52:00.000 2016-04-22 21:30:00.000 784132
2016-03-04 14:11:00.000 2016-03-14 19:00:00.000 784136
2016-03-04 14:17:00.000 2016-03-11 16:52:00.000 784176
2016-03-04 14:42:00.000 2016-03-13 15:25:00.000 784070
2016-03-04 16:00:00.000 2016-03-11 17:30:00.000 784655
2016-03-04 16:30:00.000 2016-03-10 23:30:00.000 784652
2016-03-04 17:25:00.000 2016-03-22 14:00:00.000 784028
2016-03-04 19:50:00.000 2016-03-10 12:42:00.000 784303
2016-03-04 20:00:00.000 2016-03-10 16:13:00.000 784006
2016-03-04 21:30:00.000 2016-03-10 18:00:00.000 784042
2016-03-04 22:25:00.000 2016-04-02 19:40:00.000 784044
2016-03-04 22:40:00.000 2016-03-15 17:30:00.000 784276
2016-03-04 22:55:00.000 2016-03-13 13:50:00.000 784257
2016-03-04 23:10:00.000 2016-03-15 13:19:00.000 784266
2016-03-05 10:30:00.000 2016-03-11 07:45:00.000 784295
2016-03-05 10:30:00.000 2016-03-16 19:00:00.000 784305
2016-03-05 11:05:00.000 2016-03-17 15:26:00.000 784320
2016-03-05 12:30:00.000 2016-03-14 11:25:00.000 784368
2016-03-05 12:50:00.000 2016-03-17 13:27:00.000 784419
2016-03-05 13:01:00.000 2016-03-11 17:00:00.000 784298
2016-03-05 14:34:00.000 2016-03-11 19:00:00.000 784286
2016-03-05 14:45:00.000 2016-04-07 12:01:00.000 784316
2016-03-05 16:00:00.000 2016-03-24 17:00:00.000 784334
2016-03-05 19:22:00.000 2016-04-12 15:56:00.000 784335
2016-03-05 19:25:00.000 2016-03-14 11:59:00.000 784346
2016-03-05 19:25:00.000 2016-03-11 16:10:00.000 784399
2016-03-05 20:15:00.000 2016-03-15 16:20:00.000 784362
2016-03-05 20:26:00.000 2016-03-12 15:03:00.000 784347
2016-03-05 23:30:00.000 2016-03-17 16:45:00.000 784476
2016-03-06 11:57:00.000 2016-03-15 21:00:00.000 784524
2016-03-06 13:17:00.000 2016-03-29 08:09:00.000 784472
2016-03-06 14:07:00.000 2016-03-15 13:55:00.000 784497
2016-03-06 15:00:00.000 2016-03-16 12:24:00.000 784474
What I am looking to do is get, for every day, a count of how many entries occur on that day.
Example Output
date Instances
01/03/2016 113
02/03/2016 100
03/03/2016 106
04/03/2016 127
05/03/2016 81
06/03/2016 59
07/03/2016 115
08/03/2016 104
09/03/2016 92
10/03/2016 105
11/03/2016 128
12/03/2016 71
13/03/2016 64
14/03/2016 99
15/03/2016 106
16/03/2016 101
17/03/2016 96
18/03/2016 127
19/03/2016 75
20/03/2016 62
21/03/2016 93
22/03/2016 109
23/03/2016 102
24/03/2016 104
25/03/2016 85
26/03/2016 87
27/03/2016 72
28/03/2016 61
29/03/2016 86
30/03/2016 90
31/03/2016 122
This is the query I am using:
with [dates] as (
    select convert(datetime, '2016-01-01') as [date] --start
    union all
    select dateadd(day, 1, [date])
    from [dates]
    where [date] < GETDATE() --end
)
select [date],
    Sum(Case when [date] between ws.Start_DTTM
                              and Case when Cast(ws.End_DTTM as date) is null then [date]
                                       else Cast(ws.End_DTTM as date) end
             then 1 else 0 end)
from [dates]
Join [STAYS] ws on Case when Cast(ws.End_DTTM as date) is null then GETDATE() - 1
                        else Cast(ws.End_DTTM as date) end = dates.date
where END_DTTM between '2016-01-01' and GETDATE()
Group BY [date]
Order by [date]
option (maxrecursion 0)
However, I am not getting the right answer; these are the correct counts, as currently computed in Excel:
Date Instances
01/03/2016 343
02/03/2016 326
03/03/2016 327
04/03/2016 332
05/03/2016 318
06/03/2016 317
07/03/2016 337
08/03/2016 332
09/03/2016 345
10/03/2016 349
11/03/2016 341
12/03/2016 323
13/03/2016 333
14/03/2016 349
15/03/2016 344
16/03/2016 358
17/03/2016 349
18/03/2016 350
19/03/2016 347
20/03/2016 351
21/03/2016 371
22/03/2016 369
23/03/2016 340
24/03/2016 335
25/03/2016 319
26/03/2016 341
27/03/2016 355
28/03/2016 351
29/03/2016 367
30/03/2016 379
31/03/2016 385
Updated as per OP comment:
In summary, for the row below
Start_date End_Date ID
2016-03-01 06:30:00.000 2016-03-07 17:30:00.000 782772
Expected output would be:
01/03/2016 1
02/03/2016 1
03/03/2016 1
04/03/2016 1
05/03/2016 1
06/03/2016 1
07/03/2016 1
Like this, I want to calculate the count for all rows, per date.
select convert(varchar(10), startdate, 103) as datee, count(*) as occurrences
from your_table
group by convert(varchar(10), startdate, 103)
Update:
Try this
;with cte as
(
    select
        startdate,
        enddate,
        datediff(day, startdate, enddate) as cnt
    from
        your_table
)
select
    convert(varchar(10), startdate, 103) as [date],
    sum(cnt)
from
    cte
group by
    convert(varchar(10), startdate, 103)

How to resample time vector data in MATLAB

I have to resample the following cell array:
dateS =
'2004-09-02 06:00:00'
'2004-09-02 07:30:00'
'2004-09-02 12:00:00'
'2004-09-02 18:00:00'
'2004-09-02 19:30:00'
'2004-09-03 00:00:00'
'2004-09-03 05:30:00'
'2004-09-03 06:00:00'
following an irregular spacing, e.g. between the 1st and 2nd rows there are 5 readings, while between the 2nd and 3rd there are 10. The number of intermediate 'readings' is stored in a vector 'v'. So, what I need is a new vector with all the intermediate dates/times in the same format as dateS.
EDIT:
There's 1h30min = 90min between the first 2 readings in the list. Five intervals between them amounts to 90 min / 5 = 18 min. Now insert five 'readings' between (1) and (2), each separated by 18 min. I need to do that for all of dateS.
Any ideas? Thanks!
You can interpolate the serial dates with interp1():
% Inputs
dates = [
'2004-09-02 06:00:00'
'2004-09-02 07:30:00'
'2004-09-02 12:00:00'
'2004-09-02 18:00:00'
'2004-09-02 19:30:00'
'2004-09-03 00:00:00'
'2004-09-03 05:30:00'
'2004-09-03 06:00:00'];
v = [5 4 3 2 4 5 3];
% Serial dates
serdates = datenum(dates,'yyyy-mm-dd HH:MM:SS');
% Interpolate
x = cumsum([1 v]);
resampled = interp1(x, serdates, x(1):x(end))';
The result:
datestr(resampled)
ans =
02-Sep-2004 06:00:00
02-Sep-2004 06:18:00
02-Sep-2004 06:36:00
02-Sep-2004 06:54:00
02-Sep-2004 07:12:00
02-Sep-2004 07:30:00
02-Sep-2004 08:37:30
02-Sep-2004 09:45:00
02-Sep-2004 10:52:30
02-Sep-2004 12:00:00
02-Sep-2004 14:00:00
02-Sep-2004 16:00:00
02-Sep-2004 18:00:00
02-Sep-2004 18:45:00
02-Sep-2004 19:30:00
02-Sep-2004 20:37:30
02-Sep-2004 21:45:00
02-Sep-2004 22:52:30
03-Sep-2004 00:00:00
03-Sep-2004 01:06:00
03-Sep-2004 02:12:00
03-Sep-2004 03:18:00
03-Sep-2004 04:24:00
03-Sep-2004 05:30:00
03-Sep-2004 05:40:00
03-Sep-2004 05:50:00
03-Sep-2004 06:00:00
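If the result is needed back as a cell array in the same 'yyyy-mm-dd HH:MM:SS' format as dateS, a small follow-up (assuming resampled from the snippet above):
resampledStr = cellstr(datestr(resampled, 'yyyy-mm-dd HH:MM:SS'));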
The following code does what you want (I picked arbitrary values for v; as long as the number of elements in vector v is one less than the number of entries in dateS, this should work):
dateS = [
'2004-09-02 06:00:00'
'2004-09-02 07:30:00'
'2004-09-02 12:00:00'
'2004-09-02 18:00:00'
'2004-09-02 19:30:00'
'2004-09-03 00:00:00'
'2004-09-03 05:30:00'
'2004-09-03 06:00:00'];
% "stations":
v = [6 5 4 3 5 6 4];
dn = datenum(dateS);
df = diff(dn)'./v;
newDates = [];
for ii = 1:numel(v)
    % each interval contributes its start plus v(ii)-1 evenly spaced intermediate
    % points; the interval's endpoint is supplied by the start of the next interval
    newDates = [newDates dn(ii) + (0:v(ii)-1)*df(ii)];
end
newStrings = datestr(newDates, 'yyyy-mm-dd HH:MM:SS');
The array newStrings ends up containing the following; for example, you can see that the interval between the first and second time has been split into six 15-minute segments:
2004-09-02 06:00:00
2004-09-02 06:15:00
2004-09-02 06:30:00
2004-09-02 06:45:00
2004-09-02 07:00:00
2004-09-02 07:15:00
2004-09-02 07:30:00
2004-09-02 08:24:00
2004-09-02 09:18:00
2004-09-02 10:12:00
2004-09-02 11:06:00
2004-09-02 12:00:00
2004-09-02 13:30:00
2004-09-02 15:00:00
2004-09-02 16:30:00
2004-09-02 18:00:00
2004-09-02 18:30:00
2004-09-02 19:00:00
2004-09-02 19:30:00
2004-09-02 20:24:00
2004-09-02 21:18:00
2004-09-02 22:12:00
2004-09-02 23:06:00
2004-09-03 00:00:00
2004-09-03 00:55:00
2004-09-03 01:50:00
2004-09-03 02:45:00
2004-09-03 03:40:00
2004-09-03 04:35:00
2004-09-03 05:30:00
2004-09-03 05:37:30
2004-09-03 05:45:00
2004-09-03 05:52:30
The code relies on a few concepts:
A date can be represented as a string or a datenum. I use built-in functions to go between them.
Once you have the date/time as a number, it is easy to interpolate.
I use the diff function to find the difference between successive times.
I don't attempt to "vectorize" the code; you were not asking for efficient code, and for an example like this the clarity of a for loop trumps everything.
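For reference, a vectorized alternative sketch (not from the original answer) that mirrors this loop's spacing, reusing dateS and v from above and interp1() as in the first answer; unlike the loop, it also includes the very last reading:
dn = datenum(dateS, 'yyyy-mm-dd HH:MM:SS');
x = cumsum([0 v]);                       % cumulative number of segments at each reading
newDates = interp1(x, dn, 0:x(end));     % one time per segment boundary
newStrings = datestr(newDates, 'yyyy-mm-dd HH:MM:SS');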