A = ones(4,4,4);
b = [1,2,3,4];
I wish to multiply A with b in such a manner that,
ans(:,:,1) == ones(4,4)*b(1);
ans(:,:,2) == ones(4,4)*b(2);
etc.
I think you are looking for the following:
A = ones(4,4,4);
B = 1:4;
C = shiftdim(B,-1);
bsxfun(@times,A,C)
Shiftdim makes sure the vector is placed in the right dimension. Then bsxfun makes sure the vector gets expanded to match the matrix, after which they can be properly multiplied.
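To see what shiftdim is doing here, a quick size check (using the B above):
B = 1:4;
size(shiftdim(B,-1))   % ans = 1 1 4, i.e. the vector now lies along the third dimension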
If you struggle to understand this function, you may just want to use a loop over the elements of b, as that should give you the same result.
In addition to Dennis' answer, you can combine permute and bsxfun like this:
bsxfun(@times, A, permute(b,[3 1 2]))
permute shifts the dimension of b so that it lies along the third dimension, and bsxfun makes sure the dimensions match when doing the multiplication.
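For comparison, a quick size check of the permuted vector (using the b from the question):
size(permute(b,[3 1 2]))   % ans = 1 1 4, i.e. b now lies along the third dimension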
I realize that you will probably need to tweak this to fit your needs. So if you have a hard time understanding how bsxfun, permute, shiftdim, etc. work, and you don't care too much about performance, you can always do it using loops:
C = zeros(size(A));
for ii = 1:numel(b)
C(:,:,ii) = A(:,:,ii)*b(ii);
end
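As a side note: in MATLAB R2016b and newer (and in Octave, which supports broadcasting), implicit expansion makes the bsxfun call unnecessary; a minimal sketch with the same A and b:
A = ones(4,4,4);
b = [1,2,3,4];
C = A .* reshape(b,1,1,[]);   % reshape puts b along the third dimension; .* expands it over the first two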
Suppose I have a 4D matrix:
>> A=1:(3*4*5*6);
>> A=reshape(A,3,4,5,6);
And now I want to cut out a given number of rows and columns (or any given chunks along known dimensions).
If I knew it was 4D, I would write:
>> A1=A(1:2,1:3,:,:);
But how to write universally for any given number of dimensions?
The following gives something different:
>> A2=A(1:2,1:3,:);
And the following gives an error:
>> A2=A;
>> A2(3:3,4:4)=[];
It is possible to write code that works for any number of dimensions of A, using the second form of indexing you used together with the reshape function.
Here is an example:
Asize = [3,4,2,6,4]; %Initialization of A, not seen by the rest of the code
A = rand(Asize);
%% This part of the code can operate for any matrix A
I = 1:2;
J = 3:4;
A1 = A(I,J,:);
NewSize = size(A);
NewSize(1) = length(I);
NewSize(2) = length(J);
A2 = reshape(A1,NewSize);
A2 will be your cropped matrix. It works for any Asize you choose.
I recommend the solution Luis Mendo suggested for the general case, but there is also a very simple solution when you know an upper limit for your dimensions. Let's assume you have at most 6 dimensions. Use 6-dimensional indexing for all matrices:
A1=A(1:2,1:3,:,:,:,:);
MATLAB will implicitly assume singleton dimensions for all remaining dimensions, returning the intended result also for arrays with fewer dimensions.
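A quick check of this behaviour (a sketch with a small 3-D example array):
T = reshape(1:24, 2, 3, 4);      % small 3-D example
T1 = T(1:2, 1:3, :, :, :, :);    % extra trailing subscripts act as singleton dimensions
isequal(size(T1), [2 3 4])       % returns true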
It sounds like you just want to use ndims.
num_dimensions = ndims(A)
if (num_dimensions == 3)
A1 = A(1:2, 1:3, :);
elseif (num_dimensions == 4)
A1 = A(1:2, 1:3, :, :);
end
If the range of possible matrix dimensions is small, this kind of if-else block keeps it simple. It seems like you want some way to create an indexing tuple (e.g. (1:2,:,:,:)) on the fly, which I don't know if there is a way to do. You must match the correct number of dimensions with your indexing: if you index in fewer dimensions than the matrix has, MATLAB returns a value with the unindexed dimensions collapsed into a single array, similar to what you get with
A1 = A(:);
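For what it's worth, such an indexing tuple can be built on the fly with a cell array of subscripts expanded as a comma-separated list; a minimal sketch (assuming the 4-D A from the question):
subs = repmat({':'}, 1, ndims(A));   % one ':' subscript per dimension of A
subs{1} = 1:2;                       % crop the first dimension
subs{2} = 1:3;                       % crop the second dimension
A1 = A(subs{:});                     % expands to A(1:2,1:3,:,:) when A is 4-D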
I need to multiply a matrix A with n matrices, and get n matrices back. For example, multiply a 2x2 matrix with 3 2x2 matrices stacked as a 2x2x3 Matlab array. bsxfun is what I usually use for such situations, but it only works for element-wise operations.
I could do something like:
blkdiag(a, a, a) * blkdiag(b(:,:,1), b(:,:,2), b(:,:,3))
but I need a solution for arbitrary n. How can I do that?
You can reshape the stacked matrices. Suppose you have a k-by-k matrix a and a stack of m k-by-k matrices sb, and you want the product a*sb(:,:,ii) for ii = 1..m. Then all you need is
sza = size(a);
b = reshape( sb, sza(2), [] ); % concatenate all matrices along the second dim
res = a * b;
res = reshape( res, sza(1), [], size(sb,3) ); % stack back to 3d
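A quick usage sketch of the above (with a made-up 2-by-2 a and a 2-by-2-by-3 stack sb):
a = [1 2; 3 4];
sb = rand(2,2,3);
sza = size(a);
b = reshape( sb, sza(2), [] );                % 2-by-6: the pages laid side by side
res = reshape( a*b, sza(1), [], size(sb,3) ); % back to 2-by-2-by-3
norm(res(:,:,2) - a*sb(:,:,2))                % ~0, i.e. res(:,:,ii) equals a*sb(:,:,ii)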
Your solution can be adapted to arbitrary size using comma-separated lists obtained from cell arrays:
[k m n] = size(B);
Acell = mat2cell(repmat(A,[1 1 n]),k,m,ones(1,n));
Bcell = mat2cell(B,k,m,ones(1,n));
blkdiag(Acell{:}) * blkdiag(Bcell{:});
You could then stack the blocks back into a 3D array using this answer, and keep only the relevant blocks.
But in this case a good old loop is probably faster:
C = NaN(size(B));
for nn = 1:n
C(:,:,nn) = A * B(:,:,nn);
end
For large stacks of matrices and/or vectors over which to execute matrix multiplication, speed can start becoming an issue. To avoid re-inventing the wheel, you could simply compile and use the following fast MEX code:
MTIMESX - Mathworks.
As a rule of thumb, MATLAB is often quite inefficient at executing for loops over large numbers of operations which look like they should be vectorizable; I cannot think of a straightforward way of generalising Shai's answer to this case.
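As a side note: MATLAB R2020b and later ship pagemtimes, which performs this page-wise matrix multiplication natively (a one-line sketch, assuming A is k-by-k and B is a k-by-k-by-n stack):
C = pagemtimes(A, B);   % C(:,:,ii) equals A * B(:,:,ii)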
I have a school project about running an SOR algorithm in Octave, but mine is very inefficient. So I have this snippet of code:
for ii=1:n
r = 1/A(ii,ii);
for jj=1:n
if (ii!=jj)
A(ii,jj) = A(ii,jj)*r;
end;
end;
b(ii,1) = b(ii,1)*r;
x(ii,1) = b(ii,1);
end;
How can I vectorize this? My first attempt was this:
for ii=1:n
r = 1/A(ii,ii);
A(find(eye(length(A))!=1)) = A(find(eye(length(A))!=1))*r;
b(ii,1) = b(ii,1)*r;
x(ii,1) = b(ii,1);
end;
But I'm not sure it helped much. Is there a better and/or more efficient way to do it?
Thanks!
You can avoid loops entirely, I believe. Notice that r is simply the reciprocal of the diagonal elements of A, so there is no need for a loop to compute it; do it directly. That's the first step. After removing the Inf entries, you want to multiply each r by the corresponding non-diagonal elements of its row, right?
Therefore, use repmat to construct a matrix R whose elements are replicated along the columns, since the same r multiplies (1,2), (1,3), ..., (1,n). But R also has non-zero diagonal elements, so set those to zero. Multiplying now gives you A, except that its diagonal elements are zero, so you just have to add them back from the original A. That can be done with A = A.*R + A.*eye(size(A,1)).
Vectorization comes from experience and, most importantly, from analyzing your code. At each step, ask whether you really need the loop; if not, replace that step with the equivalent vectorized command and the rest of the code will follow (for example, I constructed a matrix R where you were constructing individual elements r, so I only had to think about converting r -> R, and the rest fell into place).
The code is as follows:
R = 1./(A.*eye(size(A,1))); % assuming A is square with no zeros on the main diagonal
R = R(~isinf(R));           % keep only the diagonal reciprocals (the off-diagonal entries were 1/0 = Inf)
R = R(:);
R1 = R;                     % save the column of reciprocals for scaling b
R = repmat(R,[1,size(A,2)]);              % replicate each reciprocal across its row
R = R.*(true(size(A,1))-eye(size(A,1)));  % zero out the diagonal of R
A = A.*R + A.*eye(size(A,1)); % code verified till here since A comes out to be the same
b = b.*R1;
x = b;
I suppose there are matrices:
A (NxN)
b (Nx1)
The code:
d = diag(A);                        % diagonal of A
A = diag(1 ./ d) * A + diag(d - 1); % scale each row by 1/d(ii), then restore the original diagonal
b = b ./ d;
x = b;
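A quick way to sanity-check this against the original double loop (a sketch with made-up test data):
n = 5;
A = rand(n) + n*eye(n);   % random square matrix with a non-zero diagonal
b = rand(n,1);
A0 = A; b0 = b;
for ii = 1:n              % the original loop version
  r = 1/A0(ii,ii);
  for jj = 1:n
    if ii ~= jj
      A0(ii,jj) = A0(ii,jj)*r;
    end
  end
  b0(ii,1) = b0(ii,1)*r;
end
d = diag(A);
Av = diag(1 ./ d) * A + diag(d - 1);
bv = b ./ d;
max(abs(Av(:) - A0(:)))   % tiny (round-off only)
max(abs(bv - b0))         % tiny (round-off only)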
I randomly bumped into this problem, and at first glance it looked interesting given that it was tagged as a vectorization problem.
I was able to come up with a bsxfun-based vectorized solution that also uses diagonal indexing. This solution seems to give me a 3-4x speedup over the loop code with decent-sized inputs.
Assuming that you are still somewhat interested in seeing a speedup on this problem, I would be eager to know what kind of speedups you get with it. Here's the code:
diag_ind = 1:size(A,1)+1:numel(A);   % linear indices of the diagonal elements of A
diag_A = A(diag_ind(:));             % column vector of diagonal elements
A = bsxfun(@rdivide,A,diag_A);       % divide each row by its diagonal element
A(diag_ind) = diag_A;                % restore the original diagonal
b(:,1) = b(:,1)./diag_A;
x(:,1) = b(:,1);
Let me know!
Suppose I have vectors x,y,z, of lengths n,m,l. I want to create a cell matrix Q using the elements of those vectors. Naively one could use a for loop as so:
for i = 1:n
for j = 1:m
for k = 1:l
Q{i,j,k} = someFunction(x(i), y(j), z(k));
end
end
end
Each element of Q is a vector.
Is there a more elegant (and probably less slow) way to do this?
x=[1 2 3 4];
y=[5 6];
z=[7 8 9];
[X Y Z]=meshgrid(x,y,z);
someFunc = @(a,b,c)[a b c]; % test function; use whatever you want
Q = arrayfun(someFunc,X,Y,Z,'UniformOutput',false);
Q{1,1,1} % output: [1 5 7]
If someFunction is defined elsewhere, use arrayfun(@someFunction,X,Y,Z); to get a handle to it. (arrayfun uses each element of the arguments as args to the function handle you provide; it, and the related cellfun, are key in avoiding loops.)
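One subtlety worth noting: meshgrid arranges its outputs as (y, x, z), so Q{i,j,k} above actually corresponds to x(j), y(i), z(k). If you want exactly the same indexing as the loop in the question, ndgrid gives that ordering (a small sketch reusing the vectors and someFunc above):
[X, Y, Z] = ndgrid(x,y,z);   % X(i,j,k) == x(i), Y(i,j,k) == y(j), Z(i,j,k) == z(k)
Q = arrayfun(someFunc, X, Y, Z, 'UniformOutput', false);
Q{2,1,3}   % output: [2 5 9], i.e. someFunc(x(2), y(1), z(3))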
If someFunction is designed this way, then it does not look possible.
You should change someFunction to take matrices and return a matrix. Then the problem becomes writing that specific someFunction using matrix operations. Although a generic solution to the original problem does not seem possible, when you consider a specific function (like the one I suggested here) it can be.
In MATLAB, I'd like to apply a function to every pair of column vectors in matrices A and B. I know there must be an efficient (non for) way of doing this, but I can't figure it out. The function will output a scalar.
Try
na = size(A,2);
nb = size(B,2);
newvector = bsxfun(@(j,k) func(A(:,j),B(:,k)), 1:na, (1:nb)');
bsxfun performs singleton expansion on 1:na and (1:nb)'. The end result, in this case, is that func will be applied to every pair of column vectors drawn from A and B.
Note that bsxfun can be tricky: it can require that the applied function support singleton expansion itself. In this case it will work to do the job you want.
Do you mean pairwise? So in a for-loop the function will work as scalar_val = func(A(i),B(i))?
If A and B have the same size, you can apply the ARRAYFUN function:
newvector = arrayfun(#(x) func(A(x),B(x)), 1:numel(A));
UPDATE:
According to your comment, you need to run all combinations of A and B as scalar_val = func(A(i), B(j)). This is a little more complicated, and for large vectors it can fill the memory quickly.
If your function is one of the standard ones, you can try using BSXFUN:
out = bsxfun(#plus, A, B');
Another way is to use MESHGRID and ARRAYFUN:
[Am, Bm] = meshgrid(A,B);
out = arrayfun(#(x) func(Am(x),Bm(x)), 1:numel(Am));
out = reshape(out, numel(B), numel(A)).'; % transpose so that out(i,j) = func(A(i), B(j))
I believe it should work, but I don't have time to test it now.