MATLAB: Dividing a year-length varying-resolution time vector into months

I have a time series in the following format:
time        data value
733408.33   x1
733409.21   x2
733409.56   x3
etc.
The data runs from approximately 01-Jan-2008 to 31-Dec-2010.
I want to separate the data into columns of monthly length.
For example, the first column (January 2008) will comprise the corresponding data values:
(first 01-Jan-2008 data value):(data value immediately preceding the first 01-Feb-2008 value)
Then the second column (February 2008):
(first 01-Feb-2008 data value):(data value immediately preceding the first 01-Mar-2008 value)
et cetera...
Some ideas I've been thinking of but don't know how to put together:
Convert all serial time numbers (e.g. 733408.33) to character strings with datestr
Use strmatch('01-January-2008',DatesInChars) to find the indices of the rows corresponding to 01-January-2008
Tricky part (?): TransformedData(:,i) = OriginalData(start:end), where start = 1 and end = strmatch(1) - 1. Then, at the end of the loop, change start to strmatch(1), run step 2 again to find the next "starting index", and set end to the new strmatch(1) - 1?
Having it speed-optimized would be nice; I am going to apply it to data with ~2 million samples.
Thanks!

I would use histc with a list of month-boundary dates as the second parameter (note: call histc with two output arguments).
The edge list can easily be created with datenum or datevec.
This way you don't have to operate on strings, and it should be fast.
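For instance, a minimal sketch of building such an edge list without a loop (the names yy, mm and edges are only for illustration; the 2008-2010 span is taken from the question):
yy = kron(2008:2010, ones(1,12));                    % each year repeated 12 times
mm = repmat(1:12, 1, 3);                             % months 1..12 for every year
edges = [datenum(yy, mm, 1), datenum(2011, 1, 1)];   % first of every month, plus one closing edge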
EDIT:
Example with result in a simple data structure (including some code from @Rody):
% Generate some test times/data
tstart = datenum('01-Jan-2008');
tend = datenum('31-Dec-2010');
tspan = tstart : tend;
tspan = tspan(:) + randn(size(tspan(:))); % add some noise so it's non-uniform
data = randn(size(tspan));
% Generate list of edges: the first day of every month, plus one extra
% edge so the last bin covers all of December 2010
edge = [];
for y = 2008:2010
    for m = 1:12
        edge = [edge datenum(y, m, 1)];
    end
end
edge = [edge datenum(2011, 1, 1)];
% Histogram
[number, bin] = histc(tspan, edge);
% Collect the result, one cell per month
result = {};
for n = 1:length(edge)-1
    result{n} = [tspan(bin == n), data(bin == n)];
end
% Test
% 04-Aug-2008 17:25:20
datestr(result{8}(4,1))
tspan(data == result{8}(4,2))
datestr(tspan(data == result{8}(4,2)))

Assuming you have sorted, non-equally-spaced date numbers, the way to go here is to put the relevant data in a cell array, so that each entry corresponds to one month and can hold a different number of elements.
Here's how to do that quite efficiently:
% generate some test times/data
tstart = datenum('01-Jan-2008');
tend = datenum('31-Dec-2010');
tspan = tstart : tend;
tspan = tspan(:) + randn(size(tspan(:))); % add some noise so it's non-uniform
data = randn(size(tspan));
% find month numbers
[~,M] = datevec(tspan);
% find indices where the month changes (first sample of each month),
% and append an end marker so the last month is included too
inds = [find(diff([0; M])); numel(data)+1];
% extract data in columns
sz = numel(inds)-1;
cols = cell(sz,1);
for ii = 1:sz
    cols{ii} = data( inds(ii) : inds(ii+1)-1 );
end
Note that it can be difficult to determine which entry in cols belongs to which month and year, so here's how to store that information in a more human-readable way:
% change this line:
[y,M] = datevec(tspan);
% and change these lines:
cols = cell(sz,3);
for ii = 1:sz
    cols{ii,1} = data( inds(ii) : inds(ii+1)-1 );
    % also store the year and month
    cols{ii,2} = y(inds(ii));
    cols{ii,3} = M(inds(ii));
end

I'll assume you have timeVals, an Nx1 double vector holding the time value of each datum, and that data is also an Nx1 array. I also assume data and timeVals are sorted by time; that is, the samples are ordered according to the time they were taken.
How about:
subs = @(x,i) x(:,i);
months = subs( datevec(timeVals), 2 ); % extract the month of year as a number from the time
r = find( months ~= [months(2:end); months(end)+1] ); % last index of each month
monthOfCell = months( r );
r( 2:end ) = r( 2:end ) - r( 1:end-1 );               % convert indices to run lengths
dataByMonth = mat2cell( data, r ); % might need to transpose data or r here...
timeByMonth = mat2cell( timeVals, r );
After running this code, you have a cell array dataByMonth in which each cell contains all data relevant to a specific month. The corresponding cell of timeByMonth holds the sampling times of the data of the respective month. Finally, monthOfCell tells you the month number (1-12) of each cell.
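As a usage sketch (assuming the dataByMonth and monthOfCell produced above), you could, for example, gather all the January data across the years:
janCells = dataByMonth( monthOfCell == 1 );   % cells whose month number is 1
janData  = vertcat( janCells{:} );            % concatenate them into one vector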

Related

Fast way to get mean values of rows according to subscripts

I have data which may be simulated in the following way:
N = 10^6;%10^8;
K = 10^4;%10^6;
subs = randi([1 K],N,1);
M = [randn(N,5) subs];
M(M<-1.2) = nan;
In other words, it is a matrix whose last column holds the subscripts.
Now I want to calculate nanmean() for each subscript. I also want to save the number of rows for each subscript. I have 'dummy' code for this:
uniqueSubs = unique(M(:,6));
avM = nan(numel(uniqueSubs),6);
for iSub = 1:numel(uniqueSubs)
    tmpM = M(M(:,6)==uniqueSubs(iSub),1:5);
    avM(iSub,:) = [nanmean(tmpM,1) size(tmpM,1)];
end
The problem is that it is too slow. I want it to work for N = 10^8 and K = 10^6 (see the commented-out values in the definition of these variables).
How can I find the mean of the data in a faster way?
This sounds like a perfect job for findgroups and splitapply.
% Find groups in the final column
G = findgroups(M(:,6));
% function to apply per group
fcn = @(group) [mean(group, 1, 'omitnan'), size(group, 1)];
% Use splitapply to apply fcn to each group in M(:,1:5)
result = splitapply(fcn, M(:, 1:5), G);
% Check
assert(isequaln(result, avM));
An alternative, without findgroups/splitapply, is to sort the rows by subscript first and then find where the subscript changes:
M = sortrows(M,6); % sort the data per subscript
IDX = diff(M(:,6)); % find where the subscript changes
tmp = find(IDX);
tmp = [0 ;tmp;size(M,1)]; % add start and end of data
for iSub = 2:numel(tmp)
    % Calculate the mean over just a single subscript, store in row iSub-1
    avM2(iSub-1,:) = [nanmean(M(tmp(iSub-1)+1:tmp(iSub),1:5),1) tmp(iSub)-tmp(iSub-1)];
end
This is some 60 times faster than your original code on my computer. The speed-up mainly comes from presorting the data and then finding all locations where the subscript changes. That way you do not have to traverse the full array on every iteration to find the correct subscripts; you only check what is necessary. You thus calculate the mean over ~100 rows at a time, instead of first having to check all 1,000,000 rows to see whether each one is needed in that iteration.
Thus, the original code tests, for each of the numel(uniqueSubs) = 10,000 subscripts, all N = 1,000,000 rows, which results in 10^10 comparisons. The proposed code sorts the rows once (sorting is N log N, on the order of 10^7 operations here) and then loops over the full array a single time without additional checks.
For completeness, here is the original code along with my version, showing that the two give the same result:
N = 10^6;%10^8;
K = 10^4;%10^6;
subs = randi([1 K],N,1);
M = [randn(N,5) subs];
M(M<-1.2) = nan;
uniqueSubs = unique(M(:,6));
%% zlon's original code
avM = nan(numel(uniqueSubs),7); % add the subscript for comparison later
tic
uniqueSubs = unique(M(:,6));
for iSub = 1:numel(uniqueSubs)
    tmpM = M(M(:,6)==uniqueSubs(iSub),1:5);
    avM(iSub,:) = [nanmean(tmpM,1) size(tmpM,1) uniqueSubs(iSub)];
end
toc
%%%%% End of zlon's code
avM = sortrows(avM,7); % Sort for comparison
%% Start of Adriaan's code
avM2 = nan(numel(uniqueSubs),6);
tic
M = sortrows(M,6);
IDX = diff(M(:,6));
tmp = find(IDX);
tmp = [0 ;tmp;size(M,1)];
for iSub = 2:numel(tmp)
    avM2(iSub-1,:) = [nanmean(M(tmp(iSub-1)+1:tmp(iSub),1:5),1) tmp(iSub)-tmp(iSub-1)];
end
toc %tic/toc should not be used for accurate timing, this is just for order of magnitude
%%%% End of Adriaan's code
all(avM(:,1:6) == avM2) % Do the comparison
% End of script
% Output
Elapsed time is 58.561347 seconds.
Elapsed time is 0.843124 seconds. % ~70 times faster
ans =
1×6 logical array
1 1 1 1 1 1 % i.e. the matrices are equal to one another

ind2sub from finding value in a 3D array

I have a dataset time_local that is size 144x91x8845 (lon x lat x hour). I want to find the indices at which a particular hour occurs.
data stores the data
time_local stores the hours that the data occur at. Due to time zone differences, not all the hours in each 144x91 face are the same.
% One year
first = datenum([num2str(years(y)),'-01-01 00:00:00']);
last = datenum([num2str(years(y)),'-12-31 23:00:00']);
dt=1/24;
subset = first:dt:last;
% Find where hour one occurs (want all hours, but starting with 1 hour)
ind = find(time_local == subset(1)); % Hour 1
% Want to save out a new data matrix with the data from hour 1
[a,b,~] = size(time_local);
ind = find(time_local == subset(1)); % Day 1
[x1,y1,z1] = ind2sub(size(time_local),ind);
ind1 = [x1,y1,z1];
data1 = NaN(a,b,length(subset)); % Preallocate new array
data1(:,:,1) = data(ind1(1,:)); % Pull out data where
ind gives me the linear indices, but I want to know the subscript indices so I can save data1 out where each 144x91 face is one hour. Right now, the ind2sub does not seem to be finding the right indices because the time_local that comes out from the indices is not correct.
Edit: I tried the following, which doesn't quite work because the time_local1 and data saved out aren't indexed correctly, but it's close. There must be a more efficient way, though.
time_local1 = NaN(a,b,length(subset));
data = NaN(a,b,length(subset));
for a1 = 1:length(subset)
    if isempty(time_local == subset(a1)) == 0
        ind = find(time_local == subset(a1)); % Hour 1
        [x1,y1,z1] = ind2sub(size(time_local),ind);
        for a2 = 1:length(x1)
            time_local1(x1(a2),y1(a2),a1) = time_local(x1(a2),y1(a2),z1(a2));
            data(x1(a2),y1(a2),a1) = data1(x1(a2),y1(a2),z1(a2));
        end
    end
end
This probably isn't the most efficient approach, since it uses loops, but it works (a loop-free way to locate hour one is sketched after the code):
[a,b,~] = size(data1); % Size of data file
first = datenum([num2str(years(y)),'-01-01 00:00:00']); % Hour one
last = datenum([num2str(years(y)),'-12-31 23:00:00']);
dt=1/24;
subset = first:dt:last;
%% Find where hour 1 occurs along the 3rd dimension for every gridcell (matching vector and vector instead of vector and matrix)
% Pre-allocate
time_local1 = NaN(a,b,length(subset));
data = NaN(a,b,length(subset));
count = 1:24:length(data1); % Set increases for beginning of each day
for a1 = 1:a % lon
    disp(a1)
    for b1 = 1:b % lat
        ind(a1,b1) = find(time_local(a1,b1,:) == first); % Location of hour one in the 3rd dimension
        ind1 = ind(a1,b1):24:length(data1); % Hours in each day starting from hour one
        for c1 = 1:length(ind1)-2
            %%% Ran once - Checking if the dates I am averaging across
            %%% actually have no missing days
            % if squeeze(time_local(a1,b1,ind1(c1):ind1(c1)+23)) ~= subset(count(c1):count(c1)+23)'
            %     disp([a1,b1])
            % end
            time_local1(a1,b1,count(c1):count(c1)+23) = time_local(a1,b1,ind1(c1):ind1(c1)+23); % Hourly data
            data(a1,b1,c1) = nanmean(data1(a1,b1,ind1(c1):ind1(c1)+23)); % Average over one day
            time(a1,b1,c1) = time_local1(a1,b1,count(c1)); % One entry per day
        end
    end
end
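As a hedged alternative sketch (not part of the original answer), the per-cell search for hour one can be done without the two outer loops by taking max over a logical array; variable names follow the answer above, and each grid cell is assumed to contain that hour exactly once:
% index along the 3rd dimension where the local time equals `first`, per grid cell
[~, ind3] = max(time_local == first, [], 3);   % 144x91 matrix of 3rd-dim indices
[nLon, nLat, ~] = size(time_local);
lin = sub2ind(size(time_local), ...
              repmat((1:nLon)', 1, nLat), ...
              repmat(1:nLat, nLon, 1), ind3);
hour1 = data1(lin);                            % 144x91 face of data at local hour one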

Vectorization instead of nested for loops in matlab

I am having trouble vectorizing this for loop in MATLAB, which is really slow.
tvec and data are N×6 and N×4 arrays respectively, and they are the inputs to the function.
% preallocate
sVec = size(tvec);
tvec_ab = zeros(sVec(1),6);
data_ab = zeros(sVec(1),4);
inc = 0;
for i = 1:12
    for j = 1:31
        inc = inc + 1;
        [I,~] = find(tvec(:,3)==i & tvec(:,2)==j,1,'first');
        if(I > 0)
            tvec_ab(inc,:) = tvec(I,:);
            data_ab(inc,:) = sum(data( (tvec(:,3) == j) & (tvec(:,2)==i) ,:));
        end
    end
end
% set output values
tvec_a = tvec_ab(1:inc,:);
data_a = data_ab(1:inc,:);
Every row in tvec holds the timestamp at which the data in the same row of the data matrix was taken. Below you can see what a row looks like:
tvec:
[year, month, day, hour, minute, second]
data:
[dataA, dataB, dataC, dataD]
In the main program we can choose to "aggregate" by month, day or hour.
The code above is an example of how the aggregation for the option 'DAY' could happen.
The first time stamp of the day is the time stamp we want our output tvec_a to have in the row for that day.
The data output for that day (a row in this case) would then be the sum of all the data for that day (see the sketch after the example below). Example:
data:
[data1ADay1, data1BDay1, data1CDay1, data1DDay1;
 data2ADay1, data2BDay1, data2CDay1, data2DDay1]
aggregated data:
[data1ADay1 + data2ADay1, data1BDay1 + data2BDay1, ...
 data1CDay1 + data2CDay1, data1DDay1 + data2DDay1]
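As a hedged sketch of that 'DAY' aggregation using findgroups/splitapply (introduced in R2015b; tvec and data as described above, assumed sorted by time):
G = findgroups(tvec(:,1), tvec(:,2), tvec(:,3));     % one group per calendar day
firstIdx = splitapply(@min, (1:size(tvec,1))', G);   % first row index of each day
tvec_a = tvec(firstIdx, :);                          % first timestamp of each day
data_a = splitapply(@(d) sum(d,1), data, G);         % column-wise daily sums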
A vectorized version (not fully tested)
[x, y] = meshgrid(1:12,1:31);
XY = [x(:) y(:)];
[I,loc]=ismember(XY,tvec(:,2:3),'rows');
tvec_ab(I)=tvec(loc(loc>0),:);
acm = accumarray(tvec(:,2:3),data);
data_ab(I) = acm(sub2ind(size(acm),tvec(:,2),tvec(:,3)));
I actually found a way to do it myself:
% J holds the indices of the first occurrence of each unique day (e.g. if there
% are multiple data points from January 1, the first time stamp from January 1
% will be the time stamp for our output)
[~,J,K] = unique(tvec(:,2:3),'rows');
% preallocate
tvec_ab = zeros(length(J),6);
data_ab = zeros(length(J),4);
tvec_ab = tvec(J,:);
% sum all data from the same days together, column-wise
for i = 1:4
    data_ab(:,i) = accumarray(K,data(:,i));
end
%set output
data_a = data_ab;
tvec_a = tvec_ab;
Thanks for your responses though

For command + interpolation: need some tips

I have a matrix A with three columns: daily dates, prices, and hours (all vectors of the same size); there are multiple prices associated with the hours in a day.
sample data below:
A_dates =      A_hours =     A_prices =
[20080902      [ 9.698       [24.09
 20080902        9.891        24.59
 20080902       10.251        24.60
 20080903        9.584        25.63
 20080903       10.450        24.96
 20080903       12.120        24.78
 20080904       12.950        26.98
 20080904       13.569        26.78
 20080904]      14.589]       25.41]
Keep in mind that I have about two years of daily data with about 10,000 prices per day, covering almost every minute from 9:30 to 16:00. Actually my initial dataset time was in milliseconds, which I then converted to hours. I have some hours, like 14.589, repeated three times with 3 different prices. Hence I did the following:
time = [A_dates, A_hours, A_prices];
[timeinhr, price] = consolidator(time, A_prices, 'mean');
where timeinhr holds both A_dates and A_hours, to take an average price at each hour (say, 14.589).
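For reference, a hedged sketch of the same per-timestamp averaging without the consolidator File Exchange function (variable names from the question; exact floating-point comparison of the hour values is assumed to be acceptable):
key = [A_dates(:), A_hours(:)];                     % one row per (date, hour) pair
[ukey, ~, g] = unique(key, 'rows');                 % group identical timestamps
meanPrice = accumarray(g, A_prices(:), [], @mean);  % average price per unique timestamp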
Then, for any missing hours at the .25, .50, .75 and integer marks, I wish to interpolate.
For each date the hours repeat, and I need to linearly interpolate the prices I don't have at some "wanted" hours. But of course I can't call interp1 on the hours column directly, because the hours repeat across multiple days. So say:
%# here I want hours in 0.25-unit increments (like 9.5 hrs)
new_timeinhr = 0:0.25:max(A_hours);
day_hour = rem(new_timeinhr, 24);
%# here I want only prices between 9.5 hours and 16 hours
new_timeinhr( day_hour <= 9.2 | day_hour >= 16.1 ) = [];
I then create a unique vector of days and want to use a for loop and an if statement to interpolate day by day, then stack the new prices in a vector one after the other:
days = unique(A_dates);
for j = 1:length(days)
    if A_dates == days(j)
        int_prices(j) = interp1(A_hours, A_prices, new_timeinhr);
    end
end
My error is:
In an assignment A(I) = B, the number of elements in B and I must be the same.
How can I write the int_prices(j) to the stack?
I recommend converting your input to a single monotonic time value. Use the MATLAB datenum format, which represents one day as 1. There are plenty of advantages to this: You get the builtin MATLAB time/date functions, you get plot labels formatted nicely as date/time via datetick, and interpolation just works. Without test data, I can't test this code, but here's the general idea.
Based on your new information that dates are stored as 20080902 (I assume yyyymmdd), I've updated the initial conversion code. Also, since the layout of A is causing confusion, I'm going to refer to the columns of A as the vectors A_prices, A_hours, and A_dates.
% This datenum vector matches A. I'm assuming they're already sorted by date and time
At = datenum(num2str(A_dates), 'yyyymmdd') + datenum(0, 0, 0, A_hours, 0, 0);
incr = datenum(0, 0, 0, 0.25, 0, 0); % 0.25 hour
t = (At(1):incr:At(end)).'; % Full timespan of dataset, in 0.25 hour increments
frac_hours = 24*(t - floor(t)); % Fractional hours into the day
t_business_day = t((frac_hours > 9.4) & (frac_hours < 16.1)); % Time vector only where you want it
P = interp1(At, A_prices, t_business_day);
I repeat, since there's no test data, I can't test the code. I highly recommend testing the date conversion code by using datestr to convert back from the datenum to readable dates.
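For instance, a minimal spot check of that round trip (assuming the At vector built above):
% print the first few converted timestamps in a readable form
datestr(At(1:5), 'dd-mmm-yyyy HH:MM:SS')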
Converting days/hours to serial date numbers, as suggested by @Peter, is definitely the way to go. Based on his code (which I already upvoted), I present below a simple example.
First I start by creating some fake data resembling what you described (with some missing parts as well):
%# three days in increments of 1 hour
dt = datenum(num2str((0:23)','2012-06-01 %02d:00'), 'yyyy-mm-dd HH:MM'); %#'
dt = [dt; dt+1; dt+2];
%# price data corresponding to each hour
p = cumsum(rand(size(dt))-0.5);
%# show plot
plot(dt, p, '.-'), datetick('x')
grid on, xlabel('Date/Time'), ylabel('Prices')
%# lets remove some rows as missing
idx = ( rand(size(dt)) < 0.1 );
hold on, plot(dt(idx), p(idx), 'ro'), hold off
legend({'prices','missing'})
dt(idx) = [];
p(idx) = [];
%# matrix same as yours: days,prices,hours
ymd = str2double( cellstr(datestr(dt,'yyyymmdd')) );
hr = str2double( cellstr(datestr(dt,'HH')) );
A = [ymd p hr];
%# let clear all variables except the data matrix A
clearvars -except A
Next we interpolate the price data across the entire range in 15-minute increments:
%# convert days/hours to serial date number
dt = datenum(num2str(A(:,[1 3]),'%d %d'), 'yyyymmdd HH');
%# create a vector of 15 min increments
t_15min = (0:0.25:(24-0.25))'; %#'
tt = datenum(0,0,0, t_15min,0,0);
%# offset serial date across all days
ymd = datenum(num2str(unique(A(:,1))), 'yyyymmdd');
tt = bsxfun(@plus, ymd', tt); %#'
tt = tt(:);
%# interpolate data at new datetimes
pp = interp1(dt, A(:,2), tt);
%# extract desired period of time from each day
idx = (9.5 <= t_15min & t_15min <= 16);
idx2 = bsxfun(@plus, find(idx), (0:numel(ymd)-1)*numel(t_15min));
P = pp(idx2(:));
%# plot interpolated data, and show extracted periods
figure, plot(tt, pp, '.-'), datetick('x'), hold on
plot([tt(idx2);nan(1,numel(ymd))], [pp(idx2);nan(1,numel(ymd))], 'r.-')
hold off, grid on, xlabel('Date/Time'), ylabel('Prices')
legend({'interpolated prices','period of 9:30 - 16:00'})
The two plots showing the original and the interpolated data are not reproduced here.
I think I might have solved it this way:
new_timeinhr = 0:0.25:max(A(:,2));
day_hour = rem(new_timeinhr, 24);
new_timeinhr( day_hour <= 9.4 | day_hour >= 16.1 ) = [];
days = unique(A(:,1));
P = [];
for j = 1:length(days)
    condition = A(:,1)==days(j);
    intprices = interp1(A(condition,2), A(condition,3), new_timeinhr);
    P = vertcat(P, intprices');
end

correlation coefficient between cells

I have a dataset stored in a similar manner to the following example:
clear all
Year = cell(1,4);
Year{1} = {'Y2007','Y2008','Y2009','Y2010','Y2011'};
Year{2} = {'Y2005','Y2006','Y2007','Y2008','Y2009'};
Year{3} = {'Y2009','Y2010','Y2011'};
Year{4} = {'Y2007','Y2008','Y2009','Y2010','Y2011'};
data = cell(1,4);
data{1} = {rand(26,1),rand(26,1),rand(26,1),rand(26,1),rand(26,1)};
data{2} = {rand(26,1),rand(26,1),rand(26,1),rand(26,1),rand(26,1)};
data{3} = {rand(26,1),rand(26,1),rand(26,1)};
data{4} = {rand(26,1),rand(26,1),rand(26,1),rand(26,1),rand(26,1)};
Each cell in 'Year' holds the times at which the measurements in the corresponding cell of 'data' were collected. For example, the first cell in Year ('Year{1}') contains the year in which each measurement in 'data{1}' was collected, so that data{1}{1} was collected in 'Y2007', data{1}{2} in 'Y2008', and so on.
I am now trying to find the correlation of each measurement with the corresponding (same-year) measurements from the other locations. For example, for the year 'Y2007' I would like to find the correlation between data{1}{1} and data{2}{3}, then between data{1}{1} and data{4}{1}, and then between data{2}{3} and data{4}{1}, and so on for the remaining years.
I know that the corrcoef command should be used to calculate the correlation, but I cannot seem to get to the stage where this is possible. Any advice would be much appreciated.
I assume one year appears only once per cell. Here is the code I ended up with (see comments for explanations):
yu = unique([Year{:}]); %# cell array of unique years across all cells
cc = cell(size(yu));    %# one cell per year
for y = 1:numel(yu)
    %# which cells have the y-th year
    yuidx = cellfun(@(x) find(ismember(x,yu{y})), Year, 'UniformOutput',0);
    yidx = find(cellfun(@(x) ~isempty(x), yuidx, 'UniformOutput',1));
    if numel(yidx) <= 1
        continue
    end
    %# find the index of the y-th year within each of those cells
    yidx2 = cell2mat(yuidx(yidx));
    %# fill a matrix with one column per location, to calculate correlations
    ydata = zeros(26,numel(yidx));
    for k = 1:numel(yidx)
        ydata(:,k) = data{yidx(k)}{yidx2(k)};
    end
    %# calculate correlation coefficients
    cc{y} = corr(ydata);
end
yu holds the list of all years, and cc contains the correlation matrix for each year. If you want, you can also keep yidx (by making it a cell array and changing the code accordingly).
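As a hedged usage sketch (assuming the yu and cc produced above), the correlation matrix for a given year can be pulled out by name:
% inspect the pairwise correlations for 2009
cc{ strcmp(yu, 'Y2009') }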