I want to make 1000 random permutations of a vector in MATLAB. I do it like this:
% vector is A
num_A = length(A);
for i = 1:1000
    n = randperm(num_A);
    A = A(n); % This is one permutation
end
This takes like 73 seconds. Is there any way to do it more efficiently?
Problem 1 - Overwriting the original vector inside the loop
Each iteration, A = A(n); overwrites A, the input vector, with a new permutation. That can be acceptable, since a permutation of a permutation is still a random permutation of the same elements. However, it is extremely inefficient, because you rewrite the entire array on every iteration, and you never keep the individual permutations.
Solution: Store the permutation into a new variable -
B(ii, :) = A(n);
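It also helps to preallocate B before the loop; a minimal sketch, assuming 1000 permutations of a row vector A:
num_A = length(A);
B = zeros(1000, num_A);          % preallocate: one permutation per row
for ii = 1:1000
    B(ii, :) = A(randperm(num_A));
end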
Problem 2 - Using i as iterator
We at Stack Overflow keep telling serious MATLAB users that using i and j as iterator names in loops is a bad idea. Check this answer to see why it can make your code slow, and check the other answers on that page for why it is bad practice in general.
Solution - use ii instead of i.
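Performance aside, the clearest pitfall is that a variable named i shadows MATLAB's built-in imaginary unit; a small illustration (not from the linked answer):
i = 5;            % i is now an ordinary variable
z1 = 1 + 2i       % 1 + 2i: complex literals such as 2i are unaffected
z2 = 1 + 2*i      % 11: this silently uses the variable, not sqrt(-1)
clear i           % restores the built-in meaning of i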
Problem 3 - Using an unnecessary for loop
You can actually avoid the for loop altogether, since the iterations do not depend on each other, and it can be faster to let MATLAB handle the iteration for you.
Solution - use arrayfun to generate 1000 results at once.
Final solution
Use arrayfun to generate a 1000 x num_A matrix of indices. I think (didn't confirm) this is faster than indexing into A directly inside the loop.
n = cell2mat(arrayfun(@(x) randperm(num_A), (1:1000)', 'UniformOutput', false));
Then store all 1000 permutations at once, into a new variable.
B = A(n);
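Putting the pieces together, a small self-contained sketch (the vector A here is made up for illustration):
A = randn(1, 1e3);      % example data, 1000 elements
num_A = length(A);
% 1000 x num_A matrix of permutation indices, then all 1000 permutations at once.
n = cell2mat(arrayfun(@(x) randperm(num_A), (1:1000)', 'UniformOutput', false));
B = A(n);               % each row of B is one permutation of A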
I find this approach pretty attractive. You can also replace randperm with Shuffle (available on the MATLAB File Exchange). Example code -
B = Shuffle(repmat(A, 1000, 1), 2);
P = perms(A);
P = P(1:1000, :);
perms returns all possible permutations of A, one per row, so just take the first 1000 rows. (Note that perms generates all factorial(num_A) permutations, so this is only practical for short vectors.)
I have a large matrix, and I need to extract a small matrix from a sliding window that runs over the large matrix. The content of the extracted matrix does not change during the operations, so I would like to extract the submatrix without creating a new copy, something that acts like a C pointer into a portion of the large matrix. How can I do this? Please help me, thank you very much :)
I did some benchmarking to test if not using an explicit temporary matrix is faster, and it's probably not:
function move_mean(N)
    M = randi(100, N);
    window_size = [50 50];
    dir_time = timeit(@() direct(M, window_size))
    tmp_time = timeit(@() with_tmp(M, window_size))
end
function direct(M, window_size)
    % Index into M directly inside the call to mean (no temporary variable).
    m = zeros(size(M) - window_size);
    for r = 1:size(M,1) - window_size(1)
        for c = 1:size(M,2) - window_size(2)
            m(r,c) = mean(mean(M(r:r+window_size(1), c:c+window_size(2))));
        end
    end
end
function with_tmp(M, window_size)
    % Copy the window into a temporary variable first, then take the mean.
    m = zeros(size(M) - window_size);
    for r = 1:size(M,1) - window_size(1)
        for c = 1:size(M,2) - window_size(2)
            tmp = M(r:r+window_size(1), c:c+window_size(2));
            m(r,c) = mean(mean(tmp));
        end
    end
end
For M of size 100*100:
dir_time =
0.22739
tmp_time =
0.22339
So it seems like using a temporary variable only makes your code more readable, not slower.
In this answer I describe what the 'best' solution is in general. Here I define 'best' as the most readable option without a significant performance hit (which is partially shown by the existing answer).
Basically there are 2 situations that you may be in.
1. You use your submatrix several times
In this situation the best solution in general is to create a temporary variable containing the submatrix.
A = M(rmin:rmax, cmin:cmax)
There may be ways around it (defining a function/anonymous function that indexes into the matrix for you), but in general that won't make you happy.
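For completeness, a minimal sketch of that workaround (rmin, cmin, rmax, cmax as above; the name subM is made up):
% An anonymous function that indexes into M on demand instead of copying
% the submatrix; subM(1,1) corresponds to M(rmin,cmin).
subM = @(r, c) M(rmin - 1 + r, cmin - 1 + c);
val = subM(2, 3);    % reads M(rmin+1, cmin+2); the submatrix is never materialized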
2. You use your submatrix only 1 time
In this case the best solution is typically exactly what you referred to in the comments:
M(rmin:rmax, cmin:cmax)
A specific case of using the submatrix only one time is when it is passed once to a function. Of course the contents of the submatrix may be used several times inside that function, but that is irrelevant here.
I have the following piece of code that is quite slow at computing percentiles from a data set ("Data"), because the input is large ("Data" is approx. 500,000 elements long, with 10080 unique values assigned from "Indices").
Are there any suggestions to make this piece of code more efficient? For example, could I somehow omit the for loop?
k = 1;
for i = 0:0.5:100   % in 0.5 fractile-steps
    FRACTILE(:,k) = accumarray(Indices, Data, [], @(x) prctile(x,i));
    k = k + 1;
end
Calling prctile again and again with the same data is causing your performance issues. Call it once for each data set:
FRACTILE = cell2mat(accumarray(Indices, Data, [], @(x) {prctile(x, 0:0.5:100)}));
Letting prctile evaluate all 201 percentiles in one call costs roughly as much computation time as just two iterations of your original code: first because prctile is faster when called this way, and second because accumarray is now called only once.
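A small self-contained check of this approach (the group structure and data sizes here are made up for illustration):
% Synthetic data: 100 groups with 100 samples each.
Indices = repmat((1:100)', 100, 1);
Data    = randn(numel(Indices), 1);
% One accumarray call; prctile computes all 201 percentiles per group at once.
FRACTILE = cell2mat(accumarray(Indices, Data, [], @(x) {prctile(x, 0:0.5:100)}));
size(FRACTILE)   % 100 x 201: one row per group, one column per percentile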
I have some code which uses sparse indexing (and there's no way I can get around that). I run it in a function, and use it for two problems where the sizes of all the variables involved do not change. However, for one problem the sparse indexing part takes 5 seconds, and for the other it takes 25 seconds.
I checked the size of every variable involved, and they are the same for both problems. I also checked that xv is a full matrix for both problem types.
So, anyone else ever run into something weird like this? Any ideas as to why this would happen? Mainly I am trying to make the code more efficient, and while 5 seconds is ok for my particular application, 25 seconds (especially when I can't explain it) is very bad.
Edit: Here is a link to a photo that profiles this weird behavior. The runtime values were recorded on the third run to ensure that the size of X is also not changing. And I did check that xv is a dense (not sparse) matrix both times.
https://www.dropbox.com/s/i41j6afanzbjdyg/weird_bcd_thing.png?dl=0
Thanks so much for any help!
Code below (runs in a for loop). If I use ptype = 1, then it's 5 seconds, ptype = 3 is 25 seconds.
clvec = cliques{k};
xcurr = full(X(clvec));
xv = reshape(xcurr - Z(offset_index(k) + 1 : offset_index(k) + ncl^2), ncl, ncl);

% These two functions both take a dense symmetric matrix and return a dense
% symmetric matrix; in both cases the size is the same for a given k.
if ptype == 1
    xv = proj_PSD(xv, 0, 0);
elseif ptype == 3
    xv = proj_Schoenberg(xv, 0);
end

Xd = vec(xv) - xcurr;

% THIS IS THE WEIRD LINE
tic
X(clvec) = xv;
toc;
In the 'WEIRD LINE', X(clvec) = xv;, you are doing random-access assignment into a sparse matrix.
Access into a sparse matrix is not constant-time; it depends on the stored data. The time may vary with the matrix values and with the indices you are trying to access.
This is not the case for a regular (full) matrix, where access time is usually stable, and faster.
To get stable, roughly constant access times, try to adapt the implementation to how you actually use the matrix, and try to avoid assigning values through random access into the sparse structure.
See the following code as a reference:
% Build a sparse 100x100 matrix with ~50 random nonzeros.
X = sparse(randi(100,50,1), randi(100,50,1), randn(1), 100, 100);

% Pre-generate 100 sets of random linear indices into the 100x100 matrix.
for i = 1:100
    rand_inds{i} = randperm(10000, 100);
end

% Time random-access assignment into the sparse matrix.
for i = 1:100
    ti = tic;
    X(rand_inds{i}) = 3;
    to_X(i) = toc(ti);
end

% Time the same assignments into a full copy of the matrix.
Xf = full(X);
for i = 1:100
    ti = tic;
    Xf(rand_inds{i}) = 3;
    to_Xf(i) = toc(ti);
end

figure; plot(to_X); hold on; plot(to_Xf, 'r');
I solved my problem! I'm posting the answer because I think it's interesting.
One thing I didn't mention in the question is that the loop goes from k = 1 to k = L, and for ptype = 3, we add one more step, and that's assigning all the diagonal indices to 0:
X(diag_index) = 0
where diag_index is computed ahead of time.
The problem is that, rather than simply storing zeros at those positions, MATLAB automatically removes those entries from the sparse structure, so on the next pass through the loop, accessing the diagonal indices forces X to be re-allocated. So I changed that line to
X(diag_index) = eps;
and now they both run equally fast! (It's not the best solution, since that's going to be a source of error later, but there's no more mystery!)
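A quick way to see the effect described above (a small illustration, not from the original post): assigning 0 to entries of a sparse matrix removes them from storage, while assigning eps keeps them allocated.
S = speye(5);          % sparse identity: 5 explicit nonzeros on the diagonal
S(1:6:25) = 0;         % linear indices of the diagonal
nnz(S)                 % 0 -- the entries were discarded from the structure
S = speye(5);
S(1:6:25) = eps;       % tiny but nonzero values
nnz(S)                 % 5 -- the diagonal entries stay allocated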
The answer is never what you think it would be...
Suppose you have 5 vectors: v_1, v_2, v_3, v_4 and v_5. These vectors each contain a range of values from a minimum to a maximum. So for example:
v_1 = minimum_value:step:maximum_value;
Each of these vectors uses the same step size but has a different minimum and maximum value. Thus they are each of a different length.
A function F(v_1, v_2, v_3, v_4, v_5) is dependent on these vectors and can use any combination of the elements within them (apologies for the poor explanation). I am trying to find the maximum value of F and record the values which produced it. My current approach has been to use multiple nested for loops, as shown below, to evaluate the function for every combination of the vectors' elements:
% Set the temp value to a small value
temp = 0;

% For every combination of the five vectors use the equation. If the result
% is greater than the one calculated previously, store it along with the
% values (positions) of elements within the vectors.
for a = 1:length(v_1)
    for b = 1:length(v_2)
        for c = 1:length(v_3)
            for d = 1:length(v_4)
                for e = 1:length(v_5)
                    % The function is a combination of trigonometrics,
                    % summations, multiplications etc.
                    Result = F(v_1(a), v_2(b), v_3(c), v_4(d), v_5(e));
                    % If the value of Result is greater than the previous
                    % value, store it and record the values of 'a', 'b',
                    % 'c', 'd' and 'e'.
                    if Result > temp
                        temp = Result;
                        f = a;
                        g = b;
                        h = c;
                        i = d;
                        j = e;
                    end
                end
            end
        end
    end
end
This gets incredibly slow for small step sizes. If there are around 100 elements in each vector, the number of combinations is around 100*100*100*100*100 (10^10). This is a problem, as I need small step values to get a suitably converged answer.
I was wondering if it is possible to speed this up using vectorization, or any other method. I also looked at generating the combinations prior to the calculation, but this seemed even slower than my current method. I haven't used MATLAB for a long time, but just looking at the number of nested for loops makes me think that this can definitely be sped up. Thank you for the suggestions.
No matter how you generate your parameter combinations, you will end up calling your function F 100^5 times. The easiest improvement would be to use parfor in order to exploit multi-core computation. If you do that, you should store the per-iteration results and find the maximum after the loop, because your current running-maximum approach is not safe inside a parfor loop; see the sketch below.
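A minimal sketch of that idea, assuming F and the vectors v_1 through v_5 exist as in the question (all other names here are illustrative): parallelize the outermost loop, keep one best result per outer index, and reduce to the global maximum afterwards.
bestVal = -inf(length(v_1), 1);      % best function value for each a
bestIdx = zeros(length(v_1), 4);     % corresponding [b c d e] for each a
parfor a = 1:length(v_1)
    localBest = -inf;
    localIdx  = [0 0 0 0];
    for b = 1:length(v_2)
        for c = 1:length(v_3)
            for d = 1:length(v_4)
                for e = 1:length(v_5)
                    r = F(v_1(a), v_2(b), v_3(c), v_4(d), v_5(e));
                    if r > localBest
                        localBest = r;
                        localIdx  = [b c d e];
                    end
                end
            end
        end
    end
    bestVal(a) = localBest;
    bestIdx(a, :) = localIdx;
end
[maxVal, aBest] = max(bestVal);                  % global maximum over all a
bestCombination = [aBest, bestIdx(aBest, :)];    % indices [a b c d e]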
Having said that, and not knowing anything about your actual problem, I would advise you to implement a more structured approach: first find a coarse solution with a bigger step size, then narrow it down successively by reducing the min/max values of your parameter intervals. What you have currently is the absolute brute-force method, which will never be very efficient.
To begin, this problem is easily solvable with a for-loop. However, I'm trying to force/teach myself to think vector-wise to take advantage of what Matlab does best.
Simplified, here is the problem explanation:
I have a vector with data in it.
I have a 2xN array of start/stop indices that represent ranges of interesting data in the vector.
I want to perform calculations on each of those ranges, resulting in a number (N results, corresponding to each start/stop range.)
In code, here's a pseudoexample of what I'd like to have at the end:
A = 1:10000;
startIndicies = [5 100 1000];
stopIndicies = [10 200 5000];
...
calculatedResults = [func(A(5:10)) func(A(100:200)) func(A(1000:5000))]
The length of A, and of the start/stop index array is variable.
Like I said, I can easily solve this with a for loop. However, since this could be used with a large data set, I'd like to know if there's a good solution without a for loop.
Here is one possible solution, although I wouldn't call it a fully vectorized solution, rather a one-liner.
out = cellfun(@(i,j) fun(A(i:j)), num2cell(startIndicies), num2cell(stopIndicies));
or, if your function returns uniform (scalar) outputs,
out = arrayfun(@(i,j) fun(A(i:j)), startIndicies, stopIndicies);
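A quick usage check with the question's pseudoexample, using mean as a stand-in for fun:
A = 1:10000;
startIndicies = [5 100 1000];
stopIndicies  = [10 200 5000];
out = arrayfun(@(i,j) mean(A(i:j)), startIndicies, stopIndicies)
% out = [7.5 150 3000]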