Vectorize matrix multiplication - matlab

I have a school project that involves implementing an SOR algorithm in Octave, but mine is very inefficient. I have this snippet of code:
for ii = 1:n
  r = 1/A(ii,ii);
  for jj = 1:n
    if (ii != jj)
      A(ii,jj) = A(ii,jj)*r;
    end;
  end;
  b(ii,1) = b(ii,1)*r;
  x(ii,1) = b(ii,1);
end;
How can I vectorize this? My first attempt was this:
for ii = 1:n
  r = 1/A(ii,ii);
  A(find(eye(length(A)) != 1)) = A(find(eye(length(A)) != 1))*r;
  b(ii,1) = b(ii,1)*r;
  x(ii,1) = b(ii,1);
end;
But I'm not sure it helped much. Is there a better and/or more efficient way to do it?
Thanks!

You can avoid the loops entirely. Notice that r is just the reciprocal of the diagonal element of A for each row, so there is no need for a loop: compute all the reciprocals at once. That's the first step. After removing the Inf entries (which come from the zeros off the diagonal), you want to multiply each row's r by the non-diagonal elements of that row.
To do that, use repmat to build a matrix R whose columns replicate these reciprocals, since the same r multiplies (1,2), (1,3), ... , (1,n). R still has non-zero diagonal elements, so set them to zero. Multiplying now gives you A with its diagonal zeroed, so you just have to add the diagonal back from the original A. That can be done by A=A.*R+A.*eye(size(A,1)).
Vectorization comes from experience and, most importantly, from analyzing your code. At each step, ask whether you really need the loop; if not, replace that step with the equivalent vectorized command and the rest of the code will follow (for example, I construct a whole matrix R where you were constructing individual elements r; once you think about converting r -> R, the rest of the code falls into place).
The code is as follows:
R = 1./(A.*eye(size(A,1)));   % assuming matrix A is square and has no zeros on the main diagonal
R = R(~isinf(R));             % keep only the diagonal reciprocals (the off-diagonal entries are Inf)
R = R(:);
R1 = R;                       % save the column of reciprocals for scaling b
R = repmat(R, [1, size(A,2)]);              % replicate the reciprocals along the columns
R = R.*(true(size(A,1)) - eye(size(A,1)));  % zero out the diagonal of R
A = A.*R + A.*eye(size(A,1)); % code verified till here since A comes out to be the same
b = b.*R1;
x = b;

I suppose there are matrices:
A (NxN)
b (Nx1)
The code:
d = diag(A);
A = diag(1 ./ d) * A + diag(d - 1); % scale each row by 1/d(i), then restore the original diagonal
b = b ./ d;
x = b;
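For what it's worth, a quick sanity check against the original loop looks like this (a minimal sketch; the test matrix, which I make diagonally dominant so the diagonal has no zeros, is purely illustrative):
n = 5;
A0 = rand(n) + n*eye(n);   % hypothetical test matrix with a non-zero diagonal
b0 = rand(n,1);
% loop version from the question
A1 = A0; b1 = b0;
for ii = 1:n
  r = 1/A1(ii,ii);
  for jj = 1:n
    if (ii ~= jj)
      A1(ii,jj) = A1(ii,jj)*r;
    end
  end
  b1(ii,1) = b1(ii,1)*r;
end
% vectorized version from this answer
d = diag(A0);
A2 = diag(1 ./ d) * A0 + diag(d - 1);
b2 = b0 ./ d;
max(abs(A1(:) - A2(:)))   % should be at the level of rounding error
max(abs(b1 - b2))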

I randomly bumped into this problem, and at first glance it looked interesting given that it was tagged as a vectorization problem.
I was able to come up with a bsxfun-based vectorized solution that also uses diagonal indexing. This solution seems to give me a 3-4x speedup over the loop code with decent-sized inputs.
Assuming that you are still somewhat interested in seeing speedup improvement on this problem, I would be eager to know the kind of speedups you would be getting with it. Here's the code -
diag_ind = 1:size(A,1)+1:numel(A);  % linear indices of the diagonal elements
diag_A = A(diag_ind(:));
A = bsxfun(@rdivide, A, diag_A);    % divide each row by its diagonal element
A(diag_ind) = diag_A;               % restore the original diagonal
b(:,1) = b(:,1)./diag_A;
x(:,1) = b(:,1);
Let me know!
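If anyone wants to time it, a rough benchmarking sketch along these lines should do; the size and the tic/toc harness below are my own assumptions, not from the original post:
n = 2000;
A = rand(n) + n*eye(n);  b = rand(n,1);   % placeholder inputs
Al = A; bl = b;
tic                         % loop version from the question
for ii = 1:n
  r = 1/Al(ii,ii);
  for jj = 1:n
    if ii ~= jj
      Al(ii,jj) = Al(ii,jj)*r;
    end
  end
  bl(ii,1) = bl(ii,1)*r;
end
toc
tic                         % bsxfun + diagonal-indexing version
diag_ind = 1:size(A,1)+1:numel(A);
diag_A = A(diag_ind(:));
A = bsxfun(@rdivide, A, diag_A);
A(diag_ind) = diag_A;
b(:,1) = b(:,1)./diag_A;
toc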

Related

Multiply tensor with vector

A = ones(4,4,4);
b = [1,2,3,4];
I wish to multiply A with b in such a manner that,
ans(:,:,1) == ones(4,4)*b(1);
ans(:,:,2) == ones(4,4)*b(2);
etc.
I think you are looking for the following:
A = ones(4,4,4);
B = 1:4;
C = shiftdim(B,-1);
bsxfun(@times,A,C)
shiftdim makes sure the vector is placed in the right dimension. Then bsxfun makes sure the vector gets expanded to match the matrix, after which they can be properly multiplied.
If you struggle to understand this function, you may just want to use a loop over the entries of b, as that should give you the same result.
In addition to Dennis' answer, you can combine permute and bsxfun like this:
bsxfun(@times, A, permute(b,[3 1 2]))
permute shifts the dimension of b so that it lies along the third dimension, and bsxfun makes sure the dimensions match when doing the multiplication.
I realize that you probably need to tweak this to fit your needs. So if you have a hard time understanding how bsxfun, permute, shiftdim etc. work, don't care about performance, and don't intend to use MATLAB the way it's meant to be used... you can always do it using loops.
C = zeros(size(A));
for ii = 1:numel(b)
  C(:,:,ii) = A(:,:,ii)*b(ii);
end
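If you want to convince yourself that the three variants agree, a small check like this should do (just a sketch using the same A and b as above):
A = ones(4,4,4);
b = [1,2,3,4];
C1 = bsxfun(@times, A, shiftdim(b,-1));        % shiftdim variant
C2 = bsxfun(@times, A, permute(b,[3 1 2]));    % permute variant
C3 = zeros(size(A));                           % plain loop
for ii = 1:numel(b)
  C3(:,:,ii) = A(:,:,ii)*b(ii);
end
isequal(C1, C2, C3)                            % should print 1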

MATLAB: I want to threshold a matrix, based on thresholds in a vector, without a for loop. Possible?

Let us say I have the following:
M = randn(10,20);
T = randn(1,20);
I would like to threshold each column of M by the corresponding entry of T. For example, find all indices of all elements of M(:,1) that are greater than T(1). Find all indices of all elements in M(:,2) that are greater than T(2), etc.
Of course, I would like to do this without a for-loop. Is this possible?
You can use bsxfun like this:
I = bsxfun(@gt, M, T);
Then I will be a logical matrix of size(M) with ones where M(:,i) > T(i).
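If you actually need the indices per column rather than the logical mask, you can pull them out of I afterwards; for example (a small illustrative sketch):
idx = arrayfun(@(k) find(I(:,k)), 1:size(M,2), 'UniformOutput', false);
% idx{k} now holds the row indices of the elements of M(:,k) that exceed T(k)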
You can use bsxfun to do things like this, but it may not be faster than a for loop (more below on this).
result = bsxfun(@gt, M, T)
This will do an element-wise comparison and return a logical matrix indicating the relationship governed by the first argument. I have posted code below to show the direct comparison, indicating that it does return what you are looking for.
% var declaration
M = randn(10,20);
T = randn(1,20);
% quick method
fastres = bsxfun(@gt, M, T);
% looping method
res = false(size(M));
for i = 1:length(T)
  res(:,i) = M(:,i) > T(i);
end
% check to see if the two matrices are identical
isMatch = all(all(fastres == res))
This function is very powerful and can be used to help speed up processes, but keep in mind that it will only speed things up if there is a lot of data. There is a bit of background work that bsxfun must do, which can actually cause it to be slower.
I would only recommend using it if you have several thousand data points. Otherwise, the traditional for-loop will actually be faster. Try it out for yourself by changing the size of the M and T variables.
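A minimal timing sketch you could adapt to see where the crossover happens (the sizes below are arbitrary assumptions on my part):
for n = [10 100 1000 5000]
  M = randn(n, 20);
  T = randn(1, 20);
  tic; I1 = bsxfun(@gt, M, T); t_bsx = toc;
  tic;
  I2 = false(size(M));
  for k = 1:length(T)
    I2(:,k) = M(:,k) > T(k);
  end
  t_loop = toc;
  fprintf('n = %4d: bsxfun %.5fs, loop %.5fs, match %d\n', n, t_bsx, t_loop, isequal(I1, I2));
end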
You can replicate the threshold vector and use matrix comparison:
s=size(M);
T2=repmat(T, s(1), 1);
M(M<T2)=0;
Indexes=find(M);

Calculating all two element sums of [a1 a2 a3 ... an] in Matlab

Context: I'm working on Project Euler Problem 23 using Matlab in order to practice my barely existing programming skills.
My Problem:
Now I have a vector with roughly 6500 numbers (ranging from 12 to 28122) as elements, and I want to calculate all the two-element sums. That is, I only need one instance of every sum, so having calculated a1 + an it's not necessary to calculate an + a1.
Edit for clarification: This includes the sums a1+a1, a2+a2,..., an+an.
The problem is that this is much too slow.
Problem specific constraints:
It's a given that sums 28123 or over aren't necessary to calculate, since those can't be used to solve the problem further.
My approach:
AbundentNumberSumsRaw = [];
for i = 1:3490
  AbundentNumberSumsRaw = [AbundentNumberSumsRaw AbundentNumbers(i)+AbundentNumbers(i:end)];
end
This works terribly :p
My Comments:
I'm pretty sure that incrementally growing the vector AbundentNumberSumsRaw is bad coding, since that means memory usage will spike unnecessarily. I haven't fixed this, since a) I don't know what size vector to pre-allocate and b) I couldn't come up with a way to insert the sums into AbundentNumberSumsRaw in an orderly manner without using some ugly-looking nested loops.
"for i=1:3490" is lower than the number of elements simply because I checked and saw that all the resulting sums for numbers whose index is above 3490 would be too large for me to use anyway.
I'm pretty sure my main issue is that the program needs to do a lot of incremental growing of the vector AbundentNumberSumsRaw.
Any and all help and suggestions would be much appreciated :)
Cheers
Rasmus
Suppose
a = 28110*rand(6500,1)+12;
then
sums = [
a(1) + a(1:end)
a(2) + a(2:end)
...
];
is the calculation you're after.
You also state that sums whose value goes over 28123 should be discarded.
This can be generalized like so:
% Compute all 2-element sums without repetitions
C = arrayfun(@(x) a(x)+a(x:end), 1:numel(a), 'uniformoutput', false);
C = cat(1, C{:});
% discard sums exceeding threshold
C(C>28123) = [];
or using a loop
% Compute all 2-element sums without repetitions
E = cell(numel(a),1);
for ii = 1:numel(a)
  E{ii} = a(ii)+a(ii:end);
end
E = cat(1, E{:});
% discard sums exceeding threshold
E(E>28123) = [];
Simple testing shows that arrayfun is somewhat faster than the loop, so I'd go for the arrayfun option.
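If you want to check that claim on your own machine, a rough sketch would be (the vector a is the same placeholder data as above):
a = 28110*rand(6500,1) + 12;   % placeholder data, as above
tic;
C = arrayfun(@(x) a(x)+a(x:end), 1:numel(a), 'uniformoutput', false);
C = cat(1, C{:});
C(C > 28123) = [];
t_arrayfun = toc;
tic;
E = cell(numel(a),1);
for ii = 1:numel(a)
  E{ii} = a(ii) + a(ii:end);
end
E = cat(1, E{:});
E(E > 28123) = [];
t_loop = toc;
[t_arrayfun, t_loop, isequal(C, E)]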
As your primary problem is to find out which integers in a given set can be written as the sum of two integers from a different set, I'd choose a different approach:
AbundantNumbers = 1:6500; % replace with the list you generated somewhere else
maxInteger = 28122;
AbundantNumberSum(1:maxInteger) = true; % logical array
for i = 1:length(AbundantNumbers)
  sumIndices = AbundantNumbers(i) + AbundantNumbers;
  AbundantNumberSum(sumIndices(sumIndices <= maxInteger)) = false;
end
Unfortunately, this is not an answer to your question but to your problem ;-) For the MATLAB way to solve your original question, see the elegant answer of Rody Oldenhuis.
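That said, if the end goal really is Project Euler 23 (the sum of all positive integers that cannot be written as the sum of two abundant numbers), the logical array above finishes the job in one line; this is my reading of the problem, so treat it as a sketch:
% integers that were never hit remain true, i.e. they cannot be written as such a sum
answer = sum(find(AbundantNumberSum));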
My approach would be the following:
v = 1:3490; % your vector here
s = length(v);
result = zeros(s); % preallocate memory
for m = 1:s
  result(m,m:end) = v(m)+v(m:end);
end
You will get a 3490 x 3490 matrix with the two-element sums in its upper triangle and zeros below the diagonal.
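To get from that triangular matrix to the list of usable sums, something like this should work (a sketch; the 28123 cutoff comes from the problem constraint above, and it assumes all entries of v are positive, as they are here):
sums = result(result > 0);   % keep the filled-in upper-triangular entries
sums = sums(sums < 28123);   % drop sums of 28123 or over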

How to perform a column by column circular shift of a matrix without a loop

I need to circularly shift individual columns of a matrix.
This is easy if you want to shift all the columns by the same amount, however, in my case I need to shift them all by a different amount.
Currently I'm using a loop and if possible I'd like to remove the loop and use a faster, vector based, approach.
My current code
A = randi(2, 4, 2);
B = A;
for i = 1:size(A,2)
  d = randi(size(A,1));
  B(:,i) = circshift(A(:,i), [d, 0]);
end
Is it possible to remove the loop from this code?
Update: I tested all three methods and compared them to the loop described in this question. I timed how long it takes to execute a column-by-column circular shift on a 1000x1000 matrix 100 times, and I repeated this test several times.
Results:
My loop took more than 12 seconds
Pursuit's suggestion took less than a second
Zroth's original answer took just over 2 seconds
Ansari's suggestion was slower than the original loop
Edit
Pursuit is right: Using a for-loop and appropriate indexing seems to be the way to go here. Here's one way of doing it:
[m, n] = size(A);
D = randi([0, m - 1], [1, n]);
B = zeros(m, n);
for i = (1 : n)
  B(:, i) = [A((m - D(i) + 1 : m), i); A((1 : m - D(i) ), i)];
end
Original answer
I've looked for something similar before, but I never came across a good solution. A modification of one of the algorithms used here gives a slight performance boost in my tests:
[m, n] = size(A);
mtxLinearIndices ...
    = bsxfun(@plus, ...
             mod(bsxfun(@minus, (0 : m - 1)', D), m), ...
             (1 : m : m * n));
C = A(mtxLinearIndices);
Ugly? Definitely. Like I said, it seems to be slightly faster (2-3 times faster for me); but both algorithms are clocking in at under a second for m = 3000 and n = 1000 (on a rather old computer, too).
It might be worth noting that, for me, both algorithms seem to outperform the algorithm provided by Ansari, though his answer is certainly more straightforward. (Ansari's algorithm's output does not agree with the other two algorithms for me; but that could just be a discrepancy in how the shifts are being applied.) In general, arrayfun seems pretty slow when I've tried to use it. Cell arrays also seem slow to me. But my testing might be biased somehow.
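For completeness, here is how I would verify the two indexing approaches against each other with a fixed set of shifts D (a small sketch; fixing D up front removes the randomness that makes Ansari's output differ):
A = randi(10, 5, 4);
[m, n] = size(A);
D = randi([0, m - 1], [1, n]);                  % one shift per column, fixed up front
% loop + indexing (the "Edit" version above)
B = zeros(m, n);
for i = 1:n
  B(:, i) = [A((m - D(i) + 1 : m), i); A((1 : m - D(i)), i)];
end
% bsxfun version
mtxLinearIndices = bsxfun(@plus, ...
    mod(bsxfun(@minus, (0 : m - 1)', D), m), ...
    (1 : m : m * n));
C = A(mtxLinearIndices);
isequal(B, C)                                   % should print 1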
Not sure how much faster this would be, but you could try this:
[nr, nc] = size(A);
B = arrayfun(@(i) circshift(A(:, i), randi(nr)), 1:nc, 'UniformOutput', false);
B = cell2mat(B);
You'll have to benchmark it, but using arrayfun may speed it up a little bit.
I suspect your circular-shifting operations on the random integer matrix do not make it any more random, since the numbers are uniformly distributed.
So I hope you are using randi() for demonstration purposes only.

Apply function to every pair of columns in two matrices in MATLAB

In MATLAB, I'd like to apply a function to every pair of column vectors in matrices A and B. I know there must be an efficient (non-for-loop) way of doing this, but I can't figure it out. The function will output a scalar.
Try
na = size(A,2);
nb = size(B,2);
newvector = bsxfun(@(j,k) func(A(:,j),B(:,k)), 1:na, (1:nb)');
bsxfun performs singleton expansion on 1:na and (1:nb)'. The end result, in this case, is that func will be applied to every pair of column vectors drawn from A and B.
Note that bsxfun can be tricky: it can require that the applied function support singleton expansion itself. In this case it will work to do the job you want.
Do you mean pairwise? So in a for-loop the function will work as scalar_val = func(A(i),B(i))?
If A and B have the same size you can apply ARRAYFUN function:
newvector = arrayfun(@(x) func(A(x),B(x)), 1:numel(A));
UPDATE:
According to your comment you need to run all combinations of A and B as scalar_val = func(A(i), B(j)). This is a little more complicated, and for large vectors it can fill the memory quickly.
If your function is one of the standard ones, you can try using BSXFUN:
out = bsxfun(@plus, A, B');
Another way is to use MESHGRID and ARRAYFUN:
[Am, Bm] = meshgrid(A,B);
out = arrayfun(@(x) func(Am(x),Bm(x)), 1:numel(Am));
out = reshape(out, numel(B), numel(A));  % match the shape of Am/Bm, so out(i,j) = func(A(j), B(i))
I believe it should work, but I don't have time to test it now.
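As a concrete sanity check of the meshgrid/arrayfun route, here is a small sketch with a made-up func (both func and the vectors are purely illustrative):
func = @(x,y) x.^2 + y;          % hypothetical example function
A = [1 2 3];
B = [10 20 30 40];
[Am, Bm] = meshgrid(A, B);
out = arrayfun(@(x) func(Am(x), Bm(x)), 1:numel(Am));
out = reshape(out, numel(B), numel(A));
% compare against the direct double loop
check = zeros(numel(B), numel(A));
for i = 1:numel(B)
  for j = 1:numel(A)
    check(i,j) = func(A(j), B(i));
  end
end
isequal(out, check)              % should print 1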