This may be a simple question; I can't see a better answer, but maybe someone else can! Here is the code:
Example variables
nSim = 3000;
nRow = 10000;
data = zeros(1, 5, nRow);
data(:, 1:4, :) = rand(4, nRow)*0.5; % 4 columns of duration values
data(:, 5, :) = 1000;                % 1 column of actual value
basis.increaseRate = 1 + (rand(nSim, 4)*0.1);
Example calculation
dataWithSim = repmat(data(:, 1:4, :), nSim, 1, 1);
increaseFactors = bsxfun(@power, basis.increaseRate, dataWithSim);
Values = bsxfun(@times, data(:, 5, :), prod(increaseFactors, 2));
The need to repmat feels wrong, but I can't see a way to avoid it.
Effectively I'm doing increase^data, and I really didn't want to loop through the two dimensions (sims or data rows). The dummy data can be ordered any way I choose, but the output needs to be an nSim-by-nRow matrix.
Any ideas welcome. Thanks.
You don't really need to use that repmat. You can directly feed that "submatrix" from data like so -
increaseFactors = bsxfun(@power, basis.increaseRate, data(:,1:4,:));
bsxfun internally takes care of expanding the singleton dimensions, which in this case is the first dimension (rows) of data. Since basis.increaseRate has nSim rows and data(:,1:4,:) has one row, the latter is expanded to nSim rows, which does the job of repmat-ing/expanding internally.
Rest of the code stays the same.
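As a toy illustration of that singleton expansion (the sizes here are arbitrary):
A = rand(3, 4);                    % 3 rows
B = rand(1, 4);                    % singleton first dimension
C = bsxfun(@times, A, B);          % B is expanded to 3 rows internally
isequal(C, A .* repmat(B, 3, 1))   % returns true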
I have the following calculations in two steps:
Initially, I create a set of 4 grid vectors, each spanning from -2 to 2:
u11grid=[-2:0.1:2];
u12grid=[-2:0.1:2];
u22grid=[-2:0.1:2];
u21grid=[-2:0.1:2];
[ca, cb, cc, cd] = ndgrid(u11grid, u12grid, u22grid, u21grid);
u11grid=ca(:);
u12grid=cb(:);
u22grid=cc(:);
u21grid=cd(:);
%grid=[u11grid u12grid u22grid u21grid]
sg=size(u11grid,1);
Next, I have an algorithm assigning the same index (equalorder) to the rows of grid sharing a specific structure:
U1grid=[-u11grid -u21grid -u12grid -u22grid Inf*ones(sg,1) -Inf*ones(sg,1)];
U2grid=[u21grid-u11grid -u21grid u22grid-u12grid -u22grid Inf*ones(sg,1) -Inf*ones(sg,1)];
s1=size(U1grid,2);
s2=size(U2grid,2);
%-------------------------------------------------------
%sortedU1grid gives U1grid with each row sorted from smallest to largest
%for each row i of sortedU1grid and for j=1,2,...,s1 index1(i,j) gives
%the column position 1,2,...,s1 in U1grid(i,:) of sortedU1grid(i,j)
[sortedU1grid,index1] = sort(U1grid,2);
%for each row i of sortedU1grid, d1(i,:) is a 1x(s1-1) row of ones and zeros
% d1(i,j)=1 if sortedU1grid(i,j+1)-sortedU1grid(i,j)=0 and d1(i,j)=0 otherwise
d1 = diff(sortedU1grid,[],2) == 0;
%-------------------------------------------------------
%Repeat for U2grid
[sortedU2grid,index2] = sort(U2grid,2);
d2 = diff(sortedU2grid,[],2) == 0;
%-------------------------------------------------------
%Assign the same index to the rows of grid sharing the same "ordering"
[~,~,equalorder] = unique([index1 index2 d1 d2],'rows', 'stable'); %sgx1
My question: is there a way to compute the algorithm in step 2 without the initial construction of the grid vectors in step 1? I am asking this because step 1 takes a lot of memory given that it basically generates the Cartesian product of 4 sets.
A solution should not rely on the specific content of U1grid and U2grid, as that part changes in my actual code. To be clear: U1grid and U2grid are ALWAYS derived from u11grid, ..., u21grid; however, the way they are derived in my actual code is slightly more complicated than what I have reported here.
As Cris Luengo mentions in a comment, you're always going to be dealing with a trade-off between speed and memory. That said, one option you have is to only compute each of your 4 grid variables (u11grid u12grid u22grid u21grid) when needed instead of computing them once and storing them. You will save on memory but will lose speed if you are recomputing each one multiple times.
The solution I came up with involves creating an anonymous function equivalent for each of the 4 grid variables, using combinations of repmat and repelem to compute each individually instead of ndgrid to compute them all together:
u11gridFcn = @() repmat((-2:0.1:2).', 41.^3, 1);
u12gridFcn = @() repmat(repelem((-2:0.1:2).', 41), 41.^2, 1);
u22gridFcn = @() repmat(repelem((-2:0.1:2).', 41.^2), 41, 1);
u21gridFcn = @() repelem((-2:0.1:2).', 41.^3);
sg = 41.^4;
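As a quick sanity check (assuming you have enough memory to run ndgrid once), these reproduce exactly what step 1 of the question builds:
[ca, cb, cc, cd] = ndgrid(-2:0.1:2, -2:0.1:2, -2:0.1:2, -2:0.1:2);
isequal(u11gridFcn(), ca(:))   % returns true
isequal(u12gridFcn(), cb(:))   % returns true
isequal(u22gridFcn(), cc(:))   % returns true
isequal(u21gridFcn(), cd(:))   % returns true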
You would then use these by replacing every usage of your 4 grid variables in U1grid and U2grid with their corresponding function call. For your specific example above, this would be the new code for U1grid and U2grid (note also the use of inf(...) instead of Inf*ones(...), a small detail):
U1grid = [-u11gridFcn() ...
-u21gridFcn() ...
-u12gridFcn() ...
-u22gridFcn() ...
inf(sg, 1) ...
-inf(sg, 1)];
U2grid = [u21gridFcn()-u11gridFcn() ...
-u21gridFcn() ...
u22gridFcn()-u12gridFcn() ...
-u22gridFcn() ...
inf(sg, 1) ...
-inf(sg, 1)];
In this example, you avoid the memory needed to store the 4 grid variables, but the values for u11grid and u12grid will each be computed twice while the values for u21grid and u22grid will each be computed three times. Likely a small time trade-off for a potentially significant memory savings.
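If the recomputation cost becomes noticeable, a possible middle ground (a sketch, using hypothetical temporary names) is to cache only the most-reused grids and clear them immediately afterwards:
u21tmp = u21gridFcn();   % used three times across U1grid and U2grid
u22tmp = u22gridFcn();   % likewise used three times
U1grid = [-u11gridFcn() -u21tmp -u12gridFcn() -u22tmp inf(sg,1) -inf(sg,1)];
U2grid = [u21tmp-u11gridFcn() -u21tmp u22tmp-u12gridFcn() -u22tmp inf(sg,1) -inf(sg,1)];
clear u21tmp u22tmp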
You may be able to remove the ndgrid, but it is not the memory bottleneck of this code, which is the call to unique on the large matrix A = [index1 index2 d1 d2]. The size of A is 2825761 by 22 (much larger than the grids), and it seems that unique may even internally copy A. I was able to avoid this call using
[sorted, ind] = sortrows([index1 index2 d1 d2]);
change = [1; any(diff(sorted), 2)];
uniqueInd = cumsum(change);
equalorder(ind) = uniqueInd;
[~, ~, equalorder] = unique(equalorder, 'stable');
where the last line is still the memory bottleneck and is only needed if you want the same numbering your code produces. If any unique ordering is okay, you can skip it. You may be able to further reduce the memory footprint by carefully clearing variables as soon as they are no longer needed.
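For instance, a sketch of such eager clearing applied to the snippet above (assuming the variables from the question are in scope):
[sorted, ind] = sortrows([index1 index2 d1 d2]);
clear index1 index2 d1 d2    % the inputs are no longer needed
change = [1; any(diff(sorted), 2)];
clear sorted
equalorder(ind) = cumsum(change);
clear change ind
[~, ~, equalorder] = unique(equalorder, 'stable');   % only if the original numbering is needed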
I have two very large matrices (228453x460) and I want to compute correlation between rows.
for i = 1:228453
    if (vec1_preprocess(i,1))
        for j = 1:228453
            df = effdf(vec1_preprocess(i,:)', vec2_preprocess(j,:)');
            corr_temp = corr(vec1_preprocess(i,:)', vec2_preprocess(j,:)');
            p = calculate_p(corr_temp, df);
            temp = (meanVec(i) + p)/2;
            meanVec(i) = temp;
        end
        disp(i);
    end
end
This takes ~1 day. Is there a direct way to compute this?
Edit: Code for effdf
function df = effdf(ts1,ts2)
%function df = effdf(ts1,ts2);
ts1=ts1-mean(ts1);
ts2=ts2-mean(ts2);
N=length(ts1);
ac1=xcorr(ts1);
ac1=ac1/max(ac1); % normalized autocorrelation
ac1=ac1(((length(ac1)+3)/2):((length(ac1)+3)/2+floor(N/4)));
ac2=xcorr(ts2);
ac2=ac2/max(ac2); % normalized autocorrelation
ac2=ac2(((length(ac2)+3)/2):((length(ac2)+3)/2+floor(N/4)));
df = 1/((1/N)+(2/N)*sum(((N-(1:length(ac1)))/N)'.*ac1.*ac2));
Since you didn't post the code for calculate_p, I'll assume that your custom functions calculate_p and effdf are well optimized and don't represent the bottleneck of your script. Let's focus on what we have.
The first problem I see is:
if (vec1_preprocess(i,1))
A check repeated over 228453 iterations can noticeably increase the running time. Instead, extract only the matrix rows whose first column is nonzero and perform your calculations on those:
idx = vec1_preprocess(:,1) ~= 0;
vec1_preprocess = vec1_preprocess(idx,:);
for i = 1:size(vec1_preprocess,1)
% ...
end
The second problem is corr. It seems you are also computing p-values, using calculate_p. Why not use the built-in p-values returned by corr as its second output argument?
[c,p] = corr(A,B);
Alternatively, if Pearson's correlation is what you are looking for, you could replace corr with corrcoef to see if it performs better.
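For two single columns, a minimal sketch (note that corrcoef returns 2-by-2 matrices, so the pairwise values sit off the diagonal):
[R, P] = corrcoef(vec1_preprocess(i,:)', vec2_preprocess(j,:)');
c = R(1,2);   % pairwise correlation coefficient
p = P(1,2);   % corresponding p-value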
Last but not least (in fact it's the most important thing): is there any reason why you are performing this computation row by row instead of running it on the whole matrices?
If you read the documentation, you'll see that corr computes the correlation between columns, not rows.
To convert rows into columns and columns into rows, simply transpose the matrix:
tmp1 = vec1_preprocess';
tmp2 = vec2_preprocess';
C = corr(tmp1,tmp2);
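Be aware that the full result C would be a 228453-by-228453 matrix (around 400 GB as doubles), which won't fit in memory. A minimal sketch of a workaround, with blockSize as a hypothetical tuning parameter, is to compute one slab of rows at a time:
blockSize = 100;                         % hypothetical block size
nRows = size(tmp1, 2);
for b = 1:blockSize:nRows
    cols = b:min(b+blockSize-1, nRows);
    Cblock = corr(tmp1(:,cols), tmp2);   % blockSize-by-228453 slab
    % ... reduce or store Cblock here before moving to the next block
end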
Suppose I have a long data vector y, plus some indices into it. I want to extract a short snippet or window around every index.
For example, suppose I want to construct a matrix containing 64 samples before and 64 samples after every value that is below three. This is trivial to do in a for-loop:
WIN_SIZE = 64;
% Sample data with padding
data = [nan(WIN_SIZE,1); randn(1e6,1); nan(WIN_SIZE,1)];
% Sample events, could be anything
index = find(data < 3);
snippets = nan(length(index), 2*WIN_SIZE + 1);
for ii = 1:length(index)
    snippets(ii,:) = data((index(ii)-WIN_SIZE):(index(ii)+WIN_SIZE));
end
However, this is not blazingly fast. Is there any way to vectorize (or otherwise speed up) this operation?
(In case this is unclear, the index could be anything and may not necessarily be a property of the data; I just wanted something simple to illustrate the idea.)
Use bsxfun -
snippets = data(bsxfun(@plus, index(:), [-WIN_SIZE:WIN_SIZE]))
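On R2016b or newer, implicit expansion lets you write the same thing without bsxfun:
snippets = data(index(:) + (-WIN_SIZE:WIN_SIZE));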
Yesterday I implemented my first bootstrap in MATLAB. (And yes, I know, for loops are evil.)
% data is an mxn matrix; the data should be sampled per column, but there
% can be NaN elements
% from the array (a column of data), n values are sampled nReps times
function result = bootstrap_std(data, n, nReps, quantil)
result = zeros(1, size(data,2));
for i = 1:size(data,2)
    bootstrap_data = zeros(n, nReps);
    values = find(~isnan(data(:,i)));
    if isempty(values)
        bootstrap_data(:,:) = NaN;
    else
        for k = 1:nReps
            bootstrap_data(:,k) = datasample(data(values,i), n);
        end
    end
    stat = zeros(1, nReps);
    for k = 1:nReps
        stat(k) = nanstd(bootstrap_data(:,k));
    end
    sort(stat);
    result(i) = quantile(stat, quantil);
end
end
As one can see, this version works columnwise. The algorithm does what it should, but it becomes really slow as the data size increases. My question is: is it possible to implement this logic without for loops? My problem is that I could not find a version of datasample that samples columnwise. Or is there a better function to use?
I am happy for any hint or idea how I can speed up this implementation.
Thanks and best regards!
stephan
The bottlenecks in your implementation are:
The function spends a lot of time inside nanstd which is unnecessary since you exclude NaN values from your sample anyway.
There are a lot of functions that operate column-wise, but you spend time looping over the columns and calling them many times.
You make many calls to datasample which is a relatively slow function. It's much faster to create a random vector of indices using randi and use that instead.
Here's how I would write the function (actually I probably wouldn't put in this many comments, and I wouldn't use so many temp variables, but I'm doing it now so you can see what all the steps of the computation are).
function result = bootstrap_std_new(data, n, nRep, quantil)
result = zeros(1, size(data,2));
for i = 1:size(data,2)
    isbad = isnan(data(:,i)); %// Vector of NaN values
    if all(isbad)
        result(i) = NaN;
    else
        data0 = data(~isbad, i); %// Temp copy of this column for indexing
        index = randi(size(data0,1), n, nRep); %// Create the indexing matrix
        bootstrapdata = data0(index); %// Sample the data
        stdevs = std(bootstrapdata); %// Stdev of sampled data
        result(i) = quantile(stdevs, quantil); %// Find the correct quantile
    end
end
end
Here are some timings
>> data = randn(100,10);
>> data(randi(1000, 50, 1)) = NaN;
>> tic, bootstrap_std(data, 50, 1000, 0.5); toc
Elapsed time is 1.359529 seconds.
>> tic, bootstrap_std_new(data, 50, 1000, 0.5); toc
Elapsed time is 0.038558 seconds.
So this gives you about a 35x speedup.
Your main issue seems to be that you may have varying numbers/positions of NaNs in each column, so you can't work on the full matrix unless you're okay with also sampling NaNs. However, some of the inner loops can be simplified.
for k=1:nReps
bootstrap_data(:,k) = datasample(data(values,i),n);
end
Since you're sampling with replacement, you should be able to just do:
bootstrap_data = datasample(data(values,i), n*nReps);
bootstrap_data = reshape(bootstrap_data, [n nReps]);
Also nanstd can work on a full matrix so no need to loop:
stat = nanstd(bootstrap_data); % or nanstd(x,0,2) to change dimension
It would also be worth just looking over your code with profile to see where the bottlenecks are.
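For example, a minimal sketch:
profile on
bootstrap_std(data, 50, 1000, 0.5);
profile viewer   % opens a report of time spent per function and per line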
I have a random column matrix:
r = rand(1,300)';
I want to re-order it so that instead of having elements in the order of 1,2,3,...,300
I will have elements 1,11,21,31,...,291,2,12,22,32,...,292,3,13,23,...,293,...,300.
In other words, I want to take every 10th value, beginning with 1 and put them in that order, then do the same for 2 with every 10th value. I know one way to do this is:
n = 10;
r = [r(1:n:numel(r)); r(2:n:numel(r)); r(3:n:numel(r));...;r(10:n:numel(r))]; % Skipped 4-9 in this example
But obviously, this is very cumbersome to do more than a couple of times. Is there something more efficient?
A loop should be easy, but I am not doing it correctly, it seems (I can see why this might not work, but I can't correct it).
(Here is what I tried:)
n = 10;
for i = 1:10
    a = [r(i:n:numel(r))];
end
Any suggestions or help is greatly appreciated.
You can do it like this:
r = reshape(reshape(r, 10, 30)', 300, 1)
EDIT:
As pointed out by @LuisMendo in the comments, it's safer to use .' than ' to transpose the matrix, because if the matrix is complex, ' would also take the complex conjugate. So it would be safer to do it like this:
r = reshape(reshape(r, 10, 30).', 300, 1)
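A quick illustration of the difference:
x = [1+2i, 3];
x'    % conjugate transpose: [1-2i; 3]
x.'   % plain transpose:     [1+2i; 3]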
You could reshape it into a 10x30 matrix, transpose, and take the linear index:
A = 1:300;
A = reshape(A, 10, 30);
A = A';
A = A(:);
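A smaller example makes the pattern easy to verify (6 elements, taking every 2nd):
B = reshape(1:6, 2, 3).';   % gives [1 2; 3 4; 5 6]
B(:).'                      % returns 1 3 5 2 4 6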
Try this -
intv = 10; %%// Interval after which you intend to get the values consecutively
out = r(reshape(reshape(1:numel(r),intv,[])',1,[]))
Some of the other solutions posted are more efficient, but your idea was a good one. It needs only a small fix to work: loop over the 10 offsets rather than over N/10, and concatenate vertically:
N = numel(r);
n = 10;
a = [];
for ii = 1:n
    a = [a; r(ii:n:N)];
end
Hope this helps