I have two big sparse double matrices in Matlab:
P with dimension 1048576 x 524288
I with dimension 1048576 x 524288
I want to find the number of entries (i,j) such that P(i,j) <= I(i,j).
Naively I tried to run
n=sum(sum(P<=I));
but it is extremely slow (I had to shut down Matlab because it kept running indefinitely and I wasn't able to stop it).
Is there a more efficient way to proceed, or is what I want to do infeasible?
From some simple tests,
n = numel(P) - nnz(P>I);
seems to be faster than sum(sum(P<=I)) or even nnz(P<=I). The reason is probably that P<=I is true wherever both entries are zero, so the sparse logical matrix P<=I has far more nonzero entries than P>I and thus requires far more memory to build.
Example:
>> P = sprand(10485, 52420, 1e-3);
>> I = sprand(10485, 52420, 1e-3);
>> tic, disp(sum(sum(P<=I))); toc
(1,1) 549074582
Elapsed time is 3.529121 seconds.
>> tic, disp(nnz(P<=I)); toc
549074582
Elapsed time is 3.538129 seconds.
>> tic, disp(nnz(P<=I)); toc
549074582
Elapsed time is 3.499927 seconds.
>> tic, disp(numel(P) - nnz(P>I)); toc
549074582
Elapsed time is 0.010624 seconds.
Of course this highly depends on the matrix sizes and density.
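To see directly why the complement is cheaper, one can compare the nonzero counts of the two logical results. A small hedged illustration (sizes and density here are arbitrary, not the original matrices):
% P<=I is true at every position where both entries are zero, so the
% logical result is nearly full; P>I can only be true where at least one
% entry is nonzero, so it stays sparse.
P = sprand(1000, 2000, 1e-3);
I = sprand(1000, 2000, 1e-3);
nnz(P <= I)   % close to numel(P) = 2e6
nnz(P > I)    % at most nnz(P) + nnz(I), i.e. a few thousand here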
Here is a solution using indices of nonzero elements:
xp = find(P);        % linear indices of the nonzeros of P
xi = find(I);        % linear indices of the nonzeros of I
vp = nonzeros(P);    % values of P at xp
vi = nonzeros(I);    % values of I at xi
[s,ia,ib] = intersect(xp,xi);    % positions where both P and I are nonzero
iia = true(numel(vp),1);
iia(ia) = false;                 % marks positions where only P is nonzero
iib = true(numel(vi),1);
iib(ib) = false;                 % marks positions where only I is nonzero
n = sum(vp(ia) <= vi(ib)) ...                       % both nonzero and P <= I
  + sum(vp(iia) < 0) ...                            % only P nonzero: P < 0 = I
  + sum(vi(iib) > 0) ...                            % only I nonzero: 0 = P < I
  + numel(P) - (numel(xp) + numel(xi) - numel(s));  % both zero: 0 <= 0
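As a quick hedged check, the result can be compared against the fast complement count from the previous answer (this assumes P, I and n are still in the workspace):
% Cross-check the index-based count against the complement-count shortcut;
% both should agree exactly.
assert(isequal(n, numel(P) - nnz(P > I)))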
I have an optimization problem in Matlab. Assume I get the following three vectors as input:
A of size (1 x N) (time-series of signal amplitude)
F of size (1 x N) (time-series of signal instantaneous frequency)
fx of size (M x 1) (frequency axis that I want to match the above on)
Now, the elements of F will not necessarily match the elements of fx exactly (99% of the time they will not), which is why I have to match each one to the closest frequency.
Here's the catch: we are talking about big data. N can easily be up to 2 million, and this has to be run a hundred times on several hundred subjects. My two concerns:
Time (main concern)
Memory (production will be run on machines with 16+ GB of memory, but development is on a machine with only 8 GB)
I have these two working solutions. For the following, N=2604000 and M=201:
Method 1 (for-loop)
A simple for-loop. Memory is no problem at all, but it is time-consuming. Easiest implementation.
tic;
I = zeros(M,N);
for i = 1:N
[~,f] = min(abs(fx-F(i)));
I(f,i) = A(i).^2;
end
toc;
Duration: 18.082 seconds.
Method 2 (vectorized)
The idea is to match the frequency axis with each instantaneous frequency, to get the id.
               F  (1 x N)
          [ 0.9  0.2  2.3  1.4 ]
     [ 0 ] [  0    1    0    0  ]
fx   [ 1 ] [  1    0    0    1  ]
     [ 2 ] [  0    0    1    0  ]
   (M x 1)         (M x N)
And then multiply each column with the amplitude at that time.
tic;
m_Ff = repmat(F,M,1);
m_fF = repmat(fx,1,N);
[~,idx] = min(abs(m_Ff - m_fF)); clearvars m_Ff m_fF;
m_if = repmat(idx,M,1); clearvars idx;
m_fi = repmat((1:M)',1,N);
I = double(m_if==m_fi); clearvars m_if m_fi;
I = bsxfun(@times,I,A);
toc;
Duration: 64.223 seconds. This is surprising to me, but probably because of the huge variable sizes: with my limited memory, MATLAB is forced to swap the variables to disk. I have an SSD, though.
The only thing I have not taken advantage of, is that the matrices will have many zero-elements. I will try and look into sparse matrices.
I need at least single precision for both the amplitudes and frequencies, but I found that converting from double to single actually takes a lot of time.
Any suggestions on how to improve?
UPDATE
Following the suggestions, I am now down to a combined time of 2.53 seconds. This takes advantage of the fact that fx is monotonically increasing and evenly spaced (always starting at 0). Here is the code:
tic; df = mode(diff(fx)); toc; % Find fx step size
tic; idx = round(F./df+1); toc; % Convert to bin ids
tic; I = zeros(M,N); toc; % Pre-allocate output
tic; lin_idx = idx + (0:N-1)*M; toc; % Find indices to insert at
tic; I(lin_idx) = A.^2; toc; % Insert
The timing outputs are the following:
Elapsed time is 0.000935 seconds.
Elapsed time is 0.021878 seconds.
Elapsed time is 0.175729 seconds.
Elapsed time is 0.018815 seconds.
Elapsed time is 2.294869 seconds.
Hence the most time-consuming step is now the very final one. Any advice on this is greatly appreciated. Thanks to @Peter and @Divakar for getting me this far.
UPDATE 2 (Solution)
Woohoo! Using sparse(i,j,v) really improves the outcome:
tic; df = fx(2)-fx(1); toc;
tic; idx = round(F./df+1); toc;
tic; I = sparse(idx,1:N,A.^2); toc;
With timings:
Elapsed time is 0.000006 seconds.
Elapsed time is 0.016213 seconds.
Elapsed time is 0.114768 seconds.
Here's one approach based on bsxfun -
abs_diff = abs(bsxfun(@minus,fx,F));
[~,idx] = min(abs_diff,[],1);
IOut = zeros(M,N);
lin_idx = idx + [0:N-1]*M;
IOut(lin_idx) = A.^2;
I'm not entirely following the relationship between F and fx, but it sounds like fx might be a set of frequency bins, and you want to find the appropriate bin for each element of F.
Optimizing this depends on the characteristics of fx.
If fx is monotonic and evenly spaced, then you don't need to search it at all. You just need to scale and offset F to align the scales, then round to get the bin number.
If fx is monotonic (sorted) but not evenly spaced, you want histc. This will use an efficient search on the edges of fx to find the correct bin. You probably need to transform fx first so that it contains the edges of the bins rather than the centers (a sketch follows below).
If it's neither, you should at least be able to sort fx to make it monotonic, storing the sort order and restoring the original order once you've found the correct "bin".
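For the monotonic-but-unevenly-spaced case, here is a minimal sketch of the histc idea. Placing the bin edges halfway between consecutive centers, and all the variable values below, are my own illustrative assumptions:
% Nearest-center binning for a monotonic but unevenly spaced fx.
fx = [0; 0.5; 1.2; 2.0; 3.5];                        % uneven frequency axis (M x 1)
F  = rand(1, 10) * 3.5;                              % instantaneous frequencies (1 x N)
A  = randn(1, 10);                                   % amplitudes (1 x N)
edges = [-Inf; (fx(1:end-1) + fx(2:end)) / 2; Inf];  % M+1 bin boundaries, midway between centers
[~, idx] = histc(F, edges);                          % idx(k) = nearest row of fx for F(k)
I = sparse(idx, 1:numel(F), A.^2, numel(fx), numel(F));  % same assembly as the sparse() update above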
How can I randomly pick N numbers from a vector a, with a weight assigned to each number?
Let's say:
a = 1:3; % possible numbers
weight = [0.3 0.1 0.2]; % corresponding weights
In this case the probability of picking 1 should be 3 times higher than that of picking 2.
Sum of all weights can be anything.
R = randsample([1 2 3], N, true, [0.3 0.1 0.2])
randsample is included in the Statistics Toolbox
Otherwise you can use some kind of roulette-wheel selection process. See this similar question (although not MATLAB specific). Here's my one-line implementation:
a = 1:3; %# possible numbers
w = [0.3 0.1 0.2]; %# corresponding weights
N = 10; %# how many numbers to generate
R = a( sum( bsxfun(@ge, rand(N,1), cumsum(w./sum(w))), 2) + 1 )
Explanation:
Consider the interval [0,1]. We assign each element of the list (1:3) a sub-interval whose length is proportional to its weight; therefore 1 gets an interval of length 0.3/(0.3+0.1+0.2), and similarly for the others.
Now if we generate a random number with uniform distribution over [0,1], then any number in [0,1] has an equal probability of being picked, thus the sub-intervals' lengths determine the probability of the random number falling in each interval.
This matches what I'm doing above: pick a number X ~ U[0,1] (or rather N such numbers), then find which interval it falls into, in a vectorized way.
You can check the results of the two techniques above by generating a large enough sequence N=1000:
>> tabulate( R )
Value Count Percent
1 511 51.10%
2 160 16.00%
3 329 32.90%
which more or less matches the normalized weights w./sum(w) = [0.5 0.16667 0.33333].
Amro gives a nice answer (that I upvoted), but it will be highly memory-intensive if you wish to generate many numbers from a large set, because the bsxfun operation builds a huge array which is then summed. For example, suppose I had a set of 10000 values to sample from, all with different weights, and wanted to generate 1000000 numbers from that sample.
That will take some doing, since it would generate a 10000-by-1000000 array internally, with 10^10 elements in it. It would be a logical array, but even so, 10 gigabytes of RAM would have to be allocated.
A better solution is to use histc. Thus...
a = 1:3
w = [.3 .1 .2];
N = 10;
[~,R] = histc(rand(1,N),cumsum([0;w(:)./sum(w)]));
R = a(R)
R =
1 1 1 2 2 1 3 1 1 1
However, for a large problem of the size I suggested above, it is fast.
a = 1:10000;
w = rand(1,10000);
N = 1000000;
tic
[~,R] = histc(rand(1,N),cumsum([0;w(:)./sum(w)]));
R = a(R);
toc
Elapsed time is 0.120879 seconds.
Admittedly, my version takes 2 lines to write. The indexing operation must happen on a second line since it uses the second output of histc. Also note that I've used the ability of the newer MATLAB releases to put the tilde (~) operator in place of the first output of histc, which causes that first output to be discarded immediately.
TL;DR
For maximum performance, if you only need a single sample, use
R = a( sum( (rand(1) >= cumsum(w./sum(w)))) + 1 );
and if you need multiple samples, use
[~, R] = histc(rand(N,1),cumsum([0;w(:)./sum(w)]));
Avoid randsample. Generating multiple samples upfront is three orders of magnitude faster than generating individual values.
Performance metrics
Since this showed up near the top of my Google search, I just wanted to add some performance metrics to show that the right solution will depend very much on the value of N and the requirements of the application, and that changing the design of the application can dramatically increase performance.
For large N, or indeed N > 1:
a = 1:3; % possible numbers
w = [0.3 0.1 0.2]; % corresponding weights
N = 100000000; % number of values to generate
w_normalized = w / sum(w) % normalised weights, for indication
fprintf('randsample:\n');
tic
R = randsample(a, N, true, w);
toc
tabulate(R)
fprintf('bsxfun:\n');
tic
R = a( sum( bsxfun(@ge, rand(N,1), cumsum(w./sum(w))), 2) + 1 );
toc
tabulate(R)
fprintf('histc:\n');
tic
[~, R] = histc(rand(N,1),cumsum([0;w(:)./sum(w)]));
toc
tabulate(R)
Results:
w_normalized =
0.5000 0.1667 0.3333
randsample:
Elapsed time is 2.976893 seconds.
Value Count Percent
1 49997864 50.00%
2 16670394 16.67%
3 33331742 33.33%
bsxfun:
Elapsed time is 2.712315 seconds.
Value Count Percent
1 49996820 50.00%
2 16665005 16.67%
3 33338175 33.34%
histc:
Elapsed time is 2.078809 seconds.
Value Count Percent
1 50004044 50.00%
2 16665508 16.67%
3 33330448 33.33%
In this case, histc is the fastest.
However, in the case where it is not possible to generate all N values up front, perhaps because the weights are updated on each iteration, i.e. N=1:
a = 1:3; % possible numbers
w = [0.3 0.1 0.2]; % corresponding weights
I = 100000; % number of values to generate
w_normalized = w / sum(w) % normalised weights, for indication
R = zeros(I,1);
fprintf('randsample:\n');
tic
for i=1:I
R(i) = randsample(a, 1, true, w);
end
toc
tabulate(R)
fprintf('cumsum:\n');
tic
for i=1:I
R(i) = a( sum( (rand(1) >= cumsum(w./sum(w)))) + 1 );
end
toc
tabulate(R)
fprintf('histc:\n');
tic
for i=1:I
[~, R(i)] = histc(rand(1),cumsum([0;w(:)./sum(w)]));
end
toc
tabulate(R)
Results:
w_normalized =
0.5000 0.1667 0.3333
randsample:
Elapsed time is 3.526473 seconds.
Value Count Percent
1 50437 50.44%
2 16149 16.15%
3 33414 33.41%
cumsum:
Elapsed time is 0.473207 seconds.
Value Count Percent
1 50018 50.02%
2 16748 16.75%
3 33234 33.23%
histc:
Elapsed time is 1.046981 seconds.
Value Count Percent
1 50134 50.13%
2 16684 16.68%
3 33182 33.18%
In this case, the custom cumsum approach (based on the bsxfun version) is fastest.
In any case, randsample certainly looks like a bad choice all round. It also goes to show that if an algorithm can be arranged to generate all random values up front then it will perform much better (note that three orders of magnitude fewer values are generated in the N=1 case, in a similar execution time).
Code is available here.
Amro has a really nice answer for this topic. However, one might want a super-fast implementation to sample from huge PDFs whose domain contains several thousand values. For such scenarios, calling bsxfun and cumsum very frequently can become costly. Motivated by gnovice's answer, it makes sense to implement the roulette-wheel algorithm with a run-length encoding scheme. I benchmarked Amro's solution against the new code:
%% Toy example: generate random numbers from an arbitrary PDF
a = 1:3; %# domain of PDF
w = [0.3 0.1 0.2]; %# Probability Values (Weights)
N = 10000; %# Number of random generations
%Generate using roulette wheel + run length encoding
factor = 1 / min(w); %Compute min factor to assign 1 bin to min(PDF)
intW = int32(w * factor); %Get replicator indexes for run length encoding
idxArr = zeros(1,sum(intW)); %Create index access array
idxArr([1 cumsum(intW(1:end-1))+1]) = 1;%Tag sample change indexes
sampTable = a(cumsum(idxArr)); %Create lookup table filled with samples
len = size(sampTable,2);
tic;
R = sampTable( uint32(randi([1 len],N,1)) );
toc;
tabulate(R);
Some evaluations of the code above for very large data, where the PDF domain is huge:
a ~ 15000, n = 10000
Without table: Elapsed time is 0.006203 seconds.
With table: Elapsed time is 0.003308 seconds.
ByteSize(sampTable) 796.23 kb
a ~ 15000, n = 100000
Without table: Elapsed time is 0.003510 seconds.
With table: Elapsed time is 0.002823 seconds.
a ~ 35000, n = 10000
Without table: Elapsed time is 0.226990 seconds.
With table: Elapsed time is 0.001328 seconds.
ByteSize(sampTable) 2.79 Mb
a ~ 35000 n = 100000
Without table: Elapsed time is 2.784713 seconds.
With table: Elapsed time is 0.003452 seconds.
a ~ 35000 n = 1000000
Without table: bsxfun: out of memory
With table : Elapsed time is 0.021093 seconds.
The idea is to create a run-length encoded table in which frequent values of the PDF are replicated more than infrequent values. We then draw an index into the weighted sample table from a uniform distribution and use the corresponding value.
It is memory-intensive, but with this approach it is even possible to scale up to PDF lengths in the hundreds of thousands, and access stays super-fast.
I have a lot of 2-by-2 matrices S1, S2, ..., SN, and on each of those matrices, I want to perform a left and right matrix multiplication as in R*S*R^T, where R is also a 2-by-2 matrix. Obviously I could just write this with a for loop, but I anticipate it being very slow for large N in MATLAB. Is there a simple and efficient way to accomplish this without using a for loop? Thanks in Advance!
Your biggest problem is not the loop. For matrices this small, each call to MATLAB's A*B introduces a lot of overhead. The best thing you can do is store all the matrices in a 4 x n_matrices array and spell out the matrix multiplications manually:
A = rand(4, 1000);
B = rand(4, 1000);
tic;
C = zeros(size(A));
C(1,:) = A(1,:).*B(1,:) + A(3,:).*B(2,:);
C(2,:) = A(2,:).*B(1,:) + A(4,:).*B(2,:);
C(3,:) = A(1,:).*B(3,:) + A(3,:).*B(4,:);
C(4,:) = A(2,:).*B(3,:) + A(4,:).*B(4,:);
toc
Elapsed time is 0.020950 seconds.
As you see, this takes very little time (on a 6-year-old desktop PC). For small matrices like this it is practical, and I cannot imagine anything else written in MATLAB that could beat it performance-wise. For a very large number of 2x2 matrices you could additionally introduce blocking (i.e., handle only a batch of matrices at a time) to improve cache reuse; a sketch follows below.
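Here is a minimal sketch of that blocking idea; the block size and array sizes are arbitrary assumptions to be tuned for your cache and problem:
% Blocking sketch: process the packed 4-by-N matrices in chunks so each
% chunk fits in cache. The block size is an arbitrary starting point.
A = rand(4, 1e6);
B = rand(4, 1e6);
C = zeros(size(A));
blk = 1e5;                                   % tune for your cache
for k = 1:blk:size(A, 2)
    c = k : min(k + blk - 1, size(A, 2));    % columns in this block
    C(1,c) = A(1,c).*B(1,c) + A(3,c).*B(2,c);
    C(2,c) = A(2,c).*B(1,c) + A(4,c).*B(2,c);
    C(3,c) = A(1,c).*B(3,c) + A(3,c).*B(4,c);
    C(4,c) = A(2,c).*B(3,c) + A(4,c).*B(4,c);
end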
I would say that the loop here is not that bad and not that slow; consider this:
N = 1000000
S = cell(1,N);
Out = S;
A = rand(2);
B = rand(2);
for i = 1 : N
S{i} = rand(2);
end
tic
for i = 1 : N
Out{i} = A * S{i} * B;
end
toc
tic
f = @(i) A*i*B;
Out = cellfun(f,S,'UniformOutput' , false);
toc
N =
1000000
Elapsed time is 2.609569 seconds.
Elapsed time is 9.871200 seconds.
You might think of concatenating your 2x2 matrices and then performing just two big multiplications (transposing blocks correctly along the way), but you will lose time in the concatenation. A sketch of that idea follows below.
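For reference, a minimal sketch of that concatenation idea for the specific R*S*R' case, with the blockwise transposes done via reshape/permute (all names and sizes here are mine):
% Concatenation sketch for Out_i = R*S_i*R': two big multiplications with
% blockwise transposes in between.
N  = 1e5;
R  = rand(2);
Ss = rand(2, 2, N);                          % S_1 ... S_N stacked along dim 3
Sh  = reshape(Ss, 2, 2*N);                   % [S_1 S_2 ... S_N]
P   = R * Sh;                                % [R*S_1 ... R*S_N]
Pt  = reshape(permute(reshape(P, 2, 2, N), [2 1 3]), 2, 2*N);  % [(R*S_1)' ...]
Q   = R * Pt;                                % [(R*S_1*R')' ...]
Out = permute(reshape(Q, 2, 2, N), [2 1 3]); % Out(:,:,i) = R*S_i*R'
% Spot check against a direct computation for one matrix:
assert(norm(Out(:,:,1) - R*Ss(:,:,1)*R', 'fro') < 1e-10)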
I have an X matrix of dimensions 37,000,000 by 22, and I want to compute the correlation matrix of X.
I.e.,
X_corr = corr(X,'type','Spearman');
And I'd like the size of X_corr to be of 22 by 22.
But it takes forever. Is there any way to compute the correlation matrix faster for such tall matrices?
Thanks!
Inspired by @Bitwise's solution, I looked into the implementation of corr (you can do so by simply typing edit corr). It has a loop over pairs of variables since it wants to deal with NaNs. If you don't have NaNs in your data, you can compute Spearman's correlation simply as:
X = rand(3e6, 22);
R = tiedrank(X); % Elapsed time is 8.956700 seconds.
C = corrcoef(R); % Elapsed time is 0.579448 seconds.
which should be same as
C2 = corr(X, 'type', 'Spearman'); % Elapsed time is 9.501480 seconds.
But it's about the same overall speed, since tiedrank dominates the cost.
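A quick hedged check of that equivalence on a small random matrix (the size is an arbitrary choice so both calls finish instantly):
Xs = rand(1000, 5);                        % small test matrix
C_rank  = corrcoef(tiedrank(Xs));          % Pearson on the ranks
C_spear = corr(Xs, 'type', 'Spearman');    % built-in Spearman
max(abs(C_rank(:) - C_spear(:)))           % should be ~0 (rounding only)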
Try corrcoef():
>> X=rand(1000000,22);
>> tic;corr(X);toc
Elapsed time is 18.320141 seconds.
>> tic;corrcoef(X);toc
Elapsed time is 0.494406 seconds
Also this is almost what you want (I don't have enough memory for 37e6x22):
>> X=rand(10000000,22);
>> tic;corrcoef(X);toc
Elapsed time is 7.620509 seconds.
Edit:
If you want Spearman, you can convert to ranks and then calculate Pearson, which is equivalent. Sorting isn't that bad:
>> X=rand(10000000,22);
>> tic;sort(X);toc
Elapsed time is 31.639637 seconds.
Is there a way to efficiently compare two matrices? I was thinking of something like
same = all(all(abs(A - B) == 0));
i.e. subtracting the values of one matrix from the other: if the result is all zeros, they are the same. There is also an isequal() function. What would be the best way to compare both matrices?
You can simply do isequal(A,B) and it will return 1 if true or 0 if false.
Since you're dealing with floating point, you probably don't want to test for exact equality (depending on your application). Thus, you can just check that
norm(A - B)
is sufficiently small, say < 1e-9, again depending on your application. This is the matrix 2-norm, which will be near zero if A - B is the all zeros matrix or nearly so.
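For instance, a small hedged sketch of that check (the matrices and the 1e-9 tolerance are only illustrative choices):
A = rand(100);
B = A + 1e-12 * randn(100);      % B differs from A only by tiny noise
tol = 1e-9;                      % pick a tolerance suited to your application
same = norm(A - B) <= tol        % logical 1 here: equal up to the tolerance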
It seems that ISEQUAL is faster than checking for non-zero elements after subtraction:
>> a = rand(100, 100);
>> b = a;
>> tic; for ii = 1:100000; any(any(a - b)); end; toc;
Elapsed time is 2.089838 seconds.
>> tic; for ii = 1:100000; isequal(a, b); end; toc;
Elapsed time is 1.201815 seconds.