I want to find faster code than P = nchoosek(1:100,i), which sits inside a loop in my code and is called repeatedly for different values of i.
nchoosek(1:100,10) is absolutely vast, far bigger than any typical machine could hold in memory.
The MATLAB documentation for nchoosek says
C = nchoosek(v,k) is only practical for situations where length(v) is less than about 15.
You're not really going to be able to do this.
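To get a sense of the scale without enumerating anything, you can pass a scalar to nchoosek, which returns only the count. A quick back-of-the-envelope check:

n = nchoosek(100, 10)   % about 1.73e13 combinations
n * 10 * 8              % about 1.4e15 bytes (~1.4 PB) as a double matrix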
I found that VChoosek(v,k) is much faster than nchoosek.
Say I have many (around 1000) large matrices (about 1000 by 1000) and I want to add them together element-wise. The very naive way is to use a temporary variable and accumulate in a loop. For example,
summ = 0;
for ii = 1:20
    for jj = 1:20
        summ = summ + rand(400);
    end
end
After searching the Internet for a while, I found the suggestion that it is better to do this with the help of sum(). For example,
sump = zeros(400,400,400);
count = 0;
for ii = 1:20
    for jj = 1:20
        count = count + 1;
        sump(:,:,count) = rand(400);
    end
end
sum(sump,3);
However, after testing both approaches, the timings were:
Elapsed time is 0.780819 seconds.
Elapsed time is 1.085279 seconds.
which means the second method is actually slower.
So I am just wondering if there is any effective way to do this addition. Assume that I am working on a computer with very large memory and a GTX 1080 (CUDA might be helpful, but I don't know whether it is worth it, since transferring data to the GPU also takes time).
Thanks for your time! Any reply will be highly appreciated!
The fastest way is to not use any explicit loops in MATLAB at all.
In many cases, MATLAB's built-in functions are well optimized to use SIMD or other acceleration techniques.
An example of using the built-in functionality to create an array of the desired size is X = rand(sz1,...,szN).
In your specific case, sum(rand(400,400,400),3) should then give you the fastest result.
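As a rough illustration (exact timings will vary by machine and MATLAB version), a minimal sketch comparing the explicit loop with the vectorized form:

tic
summ = 0;
for ii = 1:400
    summ = summ + rand(400);   % 400 accumulations of 400-by-400 matrices
end
toc

tic
sumv = sum(rand(400,400,400), 3);   % one allocation, one reduction along dim 3
toc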
This might sound a little like a beginner's question but bear with me.
I have two large matrices A and B of dimensions 1502x128x128 (complex double).
I am trying to add them together, but it seems to take forever for some reason.
I was wondering if you could direct me to a faster way of doing this.
So far I have tried the following two scripts:
First:
C = zeros(1502,128,128);
C = [A+B]
Second:
C = zeros(1502,128,128);
for ss = 1:128
    C(:,:,ss) = squeeze(A(:,:,ss)) + squeeze(B(:,:,ss));
end
Is it slow because the data is complex and there is no way around that, or what are your thoughts on this?
Thank you
How much free memory do you have?
A = rand(1502,128,128)+1i*rand(1502,128,128);
B = rand(1502,128,128)+1i*rand(1502,128,128);
tic
C = A+B;
toc
takes:
Elapsed time is 0.211576 seconds.
and it occupies about 1.2 GB RAM.
I can't imagine any reason other than memory issues.
Preallocating (C=zeros(1502,128,128);) is not necessary in this case. But you could try to clear C at the beginning.
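On Windows you can check with MATLAB's memory function (it is Windows-only):

[user, sys] = memory;
user.MemAvailableAllArrays / 2^30   % GB currently available for MATLAB arrays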
I've read some similar posts, but none of them actually tackled my problem.
I need to do a series of multiplication-like operations on A and B, specifically calculating kernel matrices, on the Windows platform. The problem is that both A and B can be really large, say 20000-by-360000, while my server can only provide 96 GB of memory. It seems infeasible to have both in memory at the same time and do the calculation. So is there any good way to efficiently handle such a large multiplication? By the way, the result, which is 20000-by-20000, is much smaller than the factors and fits in memory comfortably.
Because I am on Windows, it may not be feasible to call functions like mmap2.
I wonder whether converting them to sparse matrices is a good option. However, that heavily depends on the properties of the data.
Another solution I've come up with is to partition the original matrices into blocks and then do the calculation block by block.
Is there any other, better solution? Any practical suggestions would be really appreciated.
Best regards,
Peiyun
If I were you, I'd look into the block processing function:
B = blockproc(filename,[M N],fun)
and use the 'Destination' parameter to save the results without overflowing your memory.
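If blockproc does not fit your data layout (it is image-oriented), the manual blocking you describe can be sketched directly. A minimal sketch, assuming the kernel is the inner product K = A*B.' and a hypothetical helper loadBlock(name, cols) that reads a column range of a factor from disk:

m = 20000; n = 360000; blk = 20000;   % one m-by-blk factor block is ~3.2 GB
K = zeros(m, m);                      % the 20000-by-20000 result (~3.2 GB)
for c = 1:blk:n
    cols = c : min(c + blk - 1, n);
    Ablk = loadBlock('A', cols);      % hypothetical loader, m-by-numel(cols)
    Bblk = loadBlock('B', cols);
    K = K + Ablk * Bblk.';            % accumulate the partial product
end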
I know MATLAB has a built-in pdist function that will calculate pairwise distances. However, my matrix is 60000 by 300, so large that MATLAB runs out of memory.
This question is a follow up on Matlab euclidean pairwise square distance function.
Is there any workaround for this computational inefficiency? I tried manually coding the pairwise distance calculations, and it usually takes a full day to run (sometimes 6 to 7 hours).
Any help is greatly appreciated!
Well, I couldn't resist playing around. I created a Matlab mex C file called pdistc that implements pairwise Euclidean distance for single and double precision. On my machine using Matlab R2012b and R2015a it's 20–25% faster than pdist (and the underlying pdistmex helper function) for large inputs (e.g., 60,000-by-300).
As has been pointed out, this problem is fundamentally bounded by memory and you're asking for a lot of it. My mex C code uses minimal memory beyond that needed for the output. In comparing its memory usage to that of pdist, it looks like the two are virtually the same. In other words, pdist is not using lots of extra memory. Your memory problem is likely in the memory used up before calling pdist (can you use clear to remove any large arrays?) or simply because you're trying to solve a big computational problem on tiny hardware.
So, my pdistc function likely won't be able to save you memory overall, but you may be able to use another feature I built in. You can calculate chunks of your overall pairwise distance vector. Something like this:
m = 6e3;
n = 3e2;
X = rand(m,n);
sz = m*(m-1)/2;
for i = 1:m:sz-m
    D = pdistc(X', i, i+m); % mex C function, X is transposed relative to pdist
    ... % Process chunk of pairwise distances
end
This is considerably slower (10 times or so) and this part of my C code is not optimized well, but it will allow much less memory use – assuming that you don't need the entire array at one time. Note that you could do the same thing much more efficiently with pdist (or pdistc) by creating a loop where you passed in subsets of X directly, rather than all of it.
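For comparison, the same chunking idea can be sketched with stock functions alone, using pdist2 from the Statistics Toolbox to compute the distance matrix in row blocks (a rough sketch, not part of pdistc):

m = 6e4; n = 300; blk = 2e3;       % each 2000-by-60000 block is ~1 GB
X = rand(m, n);
for r = 1:blk:m
    rows = r : min(r + blk - 1, m);
    Dblk = pdist2(X(rows,:), X);   % one row block of pairwise distances
    ... % process or save Dblk, then discard it
end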
If you have a 64-bit Intel Mac, you won't need to compile as I've included the .mexmaci64 binary, but otherwise you'll need to figure out how to compile the code for your machine. I can't help you with that. It's possible that you may not be able to get it to compile or that there will be compatibility issues that you'll need to solve by editing the code yourself. It's also possible that there are bugs and the code will crash Matlab. Also, note that you may get slightly different outputs relative to pdist with differences between the two in the range of machine epsilon (eps). pdist may or may not do fancy things to avoid overflows for large inputs and other numeric issues, but be aware that my code does not.
Additionally, I created a simple pure Matlab implementation. It is massively slower than the mex code, but still faster than a naïve implementation or the code found in pdist.
All of the files can be found here. The ZIP archive includes all of the files. It's BSD licensed. Feel free to optimize (I tried BLAS calls and OpenMP in the C code to no avail – maybe some pointer magic or GPU/OpenCL could further speed it up). I hope that it can be helpful to you or someone else.
On my system the following is the fastest (even faster than the C code pdistc by @horchler):
function [ mD ] = CalcDistMtx ( mX )
vSsqX = sum(mX .^ 2);
mD = sqrt(bsxfun(@plus, vSsqX.', vSsqX) - (2 * (mX.' * mX)));
end
You'll need a very well tuned C code to beat this, I think.
Update
Since R2016b, MATLAB supports implicit expansion, so bsxfun() is no longer needed.
Hence the code can be written:
function [ mD ] = CalcDistMtx ( mX )
vSsqX = sum(mX .^ 2, 1);
mD = sqrt(vSsqX.' + vSsqX - (2 * (mX.' * mX)));
end
A generalization is given in my Calculate Distance Matrix project.
P. S.
Using MATLAB's pdist for comparison: squareform(pdist(mX.')) is equivalent to CalcDistMtx(mX).
Namely, the input should be transposed.
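A quick sanity check of that equivalence on a small random input:

mX = rand(300, 1000);         % 300 dimensions, 1000 points (as columns)
mD1 = CalcDistMtx(mX);
mD2 = squareform(pdist(mX.'));
max(abs(mD1(:) - mD2(:)))     % differences on the order of eps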
Computers are not infinitely large or infinitely fast. People think that they have a lot of memory and a fast CPU, so they just create larger and larger problems, and then eventually wonder why their problem runs slowly. The fact is, this is NOT computational inefficiency. It is JUST an overloaded CPU.
As Oli points out in a comment, there are something like 2e9 values to compute, even assuming you only compute the upper or lower half of the distance matrix. (6e4^2/2 is approximately 2e9.) This will require roughly 16 gigabytes of RAM to store, assuming that only ONE copy of the array is created in memory. If your code is sloppy, you might easily double or triple that. As soon as you go into virtual memory, things get much slower.
Wanting a big problem to run fast is not enough. To really help you, we need to know how much RAM is available. Is this a virtual memory issue? Are you using 64 bit MATLAB, on a CPU that can handle all the needed RAM?
I'm trying to write a simple generic parallel code for minimizing a function in MATLAB. The idea is very simple, essentially:
parfor k = 1:N
    (... find a good solution xcurrent with cost fxcurrent ...)
    % keep best current value
    fmin = min(fmin, fxcurrent);
end
This works fine, because fmin is a reduction variable, and thus I can use this construction to update the current best value.
However, I couldn't find a nice elegant way of keeping (or storing) the best current solution ("xcurrent").
How do I keep track of the best solution found so far?
In other words, if the current value is strictly smaller than fmin, how can I save xcurrent (subject to the constraints that parallel loops impose in MATLAB)?
[Of course, the serial version is trivial: just prepend
if fxcurrent < fmin
    xbest = xcurrent;
end
but this does not work in a parfor loop.]
A few approaches that come to mind:
I could just store all solutions and costs (using sliced variables), but this is hugely memory inefficient (the number of iterations N is very large, and the solutions themselves are very big).
Similarly, I could use a (set or matrix) reduction variable and do:
solutionset = [solutionset,xcurrent]
but this is almost as bad in terms of memory requirement.
I could also save xcurrent to disk every time the solution is improved.
I looked around for a simpler solution, but found nothing very useful.
The question seems to be well-defined (so it's not like in other problems, where the output could depend on iteration order), but I couldn't find an elegant way of doing this.
Apologies in advance if I'm missing something obvious, and thanks a lot in advance!
Thanks; I'll copy the suggestion down here.
Just an idea: what if you write your own reduction function, basically just containing the if block and a save or output?
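A minimal sketch of that idea, assuming a hypothetical solver findSolution(); parfor accepts reductions of the form x = f(x, expr) as long as f is associative and commutative, so the cost and the solution can travel together as a cell pair:

best = {inf, []};   % {fmin, xbest}
parfor k = 1:N
    [xcurrent, fxcurrent] = findSolution();        % hypothetical solver
    best = keepBest(best, {fxcurrent, xcurrent});  % custom reduction
end
fmin = best{1};
xbest = best{2};

function out = keepBest(a, b)
% Reduction function: keep the {cost, solution} pair with the lower cost.
if a{1} <= b{1}
    out = a;
else
    out = b;
end
end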
You will presumably need to maintain multiple xcurrent structures in memory anyway, since there will have to be a separate copy for each worker executing the loop body. I would try splitting your loop into an outer parallel part and an inner serial part; this will allow you to adjust the number of copies of xcurrent separately from the total iteration count.
The inner (serial) loop can use the normal if fxcurrent < fmin; xmin = xcurrent; end construct to update its best solution, and the outer (parallel) loop can just store all solutions using slicing. As a final step you select the best solution from your (small) set.
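A minimal sketch of this split, again with a hypothetical findSolution() standing in for the loop body:

nOuter = 8;                % parallel chunks (match your pool size)
nInner = 1000;             % serial iterations per chunk, nOuter*nInner = N
xBest = cell(nOuter, 1);   % sliced output: one best solution per chunk
fBest = inf(nOuter, 1);    % sliced output: one best cost per chunk

parfor k = 1:nOuter
    xk = [];
    fk = inf;
    for j = 1:nInner                              % serial inner loop
        [xcurrent, fxcurrent] = findSolution();   % hypothetical solver
        if fxcurrent < fk
            fk = fxcurrent;
            xk = xcurrent;
        end
    end
    xBest{k} = xk;   % only nOuter copies are kept, not N
    fBest(k) = fk;
end

[fmin, idx] = min(fBest);   % final serial step: best of the small set
xmin = xBest{idx};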