I am trying to multiply two matrices in Matlab, but they don't have the same dimensions. In fact, my multiplication isn't quite the standard matrix multiplication.
I have a 31-by-1 matrix (or vector), and a 31-by-512-by-512 matrix.
I would like to take the 1st element of my vector, multiply the 1st 512-by-512 slice by it, and so on, resulting in a new 31-by-512-by-512 array.
But I don't want to use for loops, for performance reasons.
This is the straightforward use case of bsxfun:
bsxfun(@times, v, M)
You may have to permute your vector v so that its singleton dimensions lie along the directions you want to expand over (in your case, dimensions two and three, the 512-by-512 slices). Since v is already 31-by-1, its second and third dimensions are singletons and the line above should work as-is; the explicit permute below is a no-op for a column vector, but it shows the general pattern:
bsxfun(@times, permute(v,[1,3,2]), M)
Note that another common way to do this is with repmat and .*, but bsxfun is more efficient.
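A minimal sketch of both approaches, using made-up data of the sizes in the question; on R2016b or newer, implicit expansion makes bsxfun unnecessary:
v = rand(31, 1);
M = rand(31, 512, 512);
% bsxfun expands v's singleton dimensions (two and three) to match M:
result = bsxfun(@times, v, M);      % 31-by-512-by-512
% R2016b+: implicit expansion does the same thing directly:
result2 = v .* M;
% Spot check one slice against the scalar product:
slice5 = v(5) * squeeze(M(5,:,:));
max(max(abs(squeeze(result(5,:,:)) - slice5)))   % ~0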
I'm trying to study some code written in MATLAB, and there are two strange lines that I can't seem to understand; maybe someone could help me out? I'm new to MATLAB; I code in C# most of the time.
As far as I know, diag(A) takes the elements of the main diagonal of matrix A. But what about the other parts of the lines? In particular, what does the 1./ operation do?
In the code below,
A is a 4x4 matrix of double values, b is the coefficient vector, and alpha is a freely chosen vector (10, 5, 4, 2).
Atld=diag(1./diag(A))*A-diag(alpha)
btld=diag(1./diag(A))*b
diag(A) returns a vector with the diagonal elements of matrix A.
./ is the element-wise division operator, so 1./diag(A) takes the reciprocal of each element of that vector.
diag(1./diag(A)) builds a diagonal matrix from that vector.
So, basically, diag(1./diag(A)) is a matrix with the reciprocals of the diagonal of A on its diagonal, and zeros everywhere else.
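A small worked example (a sketch, with made-up values for A, b and alpha) of what the two lines compute:
A     = [4 1 0 0; 1 5 2 0; 0 2 6 1; 0 0 1 3];
b     = [1; 2; 3; 4];
alpha = [10; 5; 4; 2];
D = diag(1./diag(A));      % reciprocals of A's diagonal, zeros elsewhere
                           % (equivalent to inv(diag(diag(A))))
Atld = D*A - diag(alpha);  % each row of A divided by its diagonal entry,
                           % then alpha subtracted from the diagonal
btld = D*b;                % b scaled the same way
% D*A has ones on its diagonal, so diag(Atld) equals 1 - alpha:
diag(Atld).'               % -9 -4 -3 -1, i.e. 1 - alpha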
I have a 3-dimensional matrix in Matlab (2 x 2 x 56, complex double).
I want to drop the 1st dimension (of size 2) so that the remaining part is 2x56, and then use it for some plotting. How can I do that?
I have come across squeeze, but it only removes singleton dimensions, so it doesn't work here. Is there anything similar?
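A sketch of two common interpretations, since "removing" a non-singleton dimension means either selecting along it or collapsing it:
M = complex(rand(2, 2, 56), rand(2, 2, 56));
% Select one index along the first dimension, then squeeze out the
% singleton that indexing leaves behind:
firstRow = squeeze(M(1, :, :));   % 2-by-56
% Or collapse the first dimension, e.g. by averaging over it:
avgSlice = squeeze(mean(M, 1));   % 2-by-56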
I am trying to use svd in my work, which returns the matrices below.
A=rand(10,5);
b=rand(10,5,100);
[U, S , V]=svd(A);
What I'm trying to do is to multiply each slice of b with U. With one specific slice, this operation is valid:
c=U'*b(:,:,1);
However, when I try the vectorized method below, it returns an array dimensions mismatch error.
Utran=U.';
c = cellfun(@(x) x.*b,num2cell(Utran,[1 2]),'UniformOutput',false);
I could probably use a loop for the first method, but that's not efficient if I have a large matrix. Any idea what my mistake is here?
The following is a vectorized solution.
I'm not sure if it's faster than using loops; and even if it is, note that it uses more memory for intermediate computations:
result = permute(sum(permute(conj(U), [1 3 4 2]).*b), [4 2 3 1]);
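An alternative sketch using reshape, which avoids the 4D intermediate: fold the third dimension of b into extra columns, apply U' in a single matrix product, then fold back.
A = rand(10, 5);
b = rand(10, 5, 100);
[U, S, V] = svd(A);
% U' applied to every slice of b at once:
c = reshape(U' * reshape(b, size(b,1), []), ...
            size(U,2), size(b,2), size(b,3));   % 10-by-5-by-100
% Check against the per-slice product:
norm(c(:,:,1) - U'*b(:,:,1))   % ~1e-15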
I have two matrices X and Y, both of order m-by-n. I want to create a new m-by-m matrix O such that the (i,j)th entry is computed by applying a function to the ith row of X and the jth row of Y. In my case m = 10000 and n = 500. I tried using a loop, but it takes forever. Is there an efficient way to do this?
I am targeting two functions: the dot product, dot(row_i, row_j), and exp(-1*norm(row_i-row_j)). But I was wondering if there is a general way, so that I can plug in any function.
Solution #1
For the first case, it looks like you can simply use matrix multiplication after transposing Y -
X*Y'
If you are dealing with complex numbers -
conj(X*ctranspose(Y))
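A quick check (a sketch with small random real matrices) that the matrix product reproduces the pairwise dot products:
X = rand(4, 3);  Y = rand(4, 3);
O = X*Y';                           % O(i,j) = dot(X(i,:), Y(j,:)) for real data
abs(O(2,3) - dot(X(2,:), Y(3,:)))   % ~0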
Solution #2
For the second case, you need to do a little more work. You can use bsxfun with permute to rearrange dimensions, write out the norm calculation in raw form, and finally use squeeze to get a 2D array output -
squeeze(exp(-1*sqrt(sum(bsxfun(@minus,X,permute(Y,[3 2 1])).^2,2))))
If you would like to avoid squeeze, you can use two permutes -
exp(-1*sqrt(sum(bsxfun(@minus,permute(X,[1 3 2]),permute(Y,[3 1 2])).^2,3)))
I would also advise you to look into this problem - Efficiently compute pairwise squared Euclidean distance in Matlab.
In conclusion, there isn't a single most efficient approach that works for every function applied to the ith and jth rows of X and Y. If you still want that generality, you can use anonymous function handles with bsxfun, but I'm afraid it won't be the most efficient technique.
For the second part, you could also use pdist2:
result = exp(-pdist2(X,Y));
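pdist2 is from the Statistics Toolbox and computes all pairwise Euclidean distances directly; a quick sketch checking it against the norm form:
X = rand(5, 3);  Y = rand(5, 3);
result = exp(-pdist2(X, Y));                     % 5-by-5
abs(result(2,4) - exp(-norm(X(2,:) - Y(4,:))))   % ~0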
I have a doubt about SVD. In the literature that I have read, it's written that we have to convert our input matrix into a covariance matrix first, and then use MATLAB's SVD function on it.
But on the MathWorks website, the SVD function is applied directly to the input matrix (no need to convert it into a covariance matrix):
[U,S,V]=svd(inImageD);
Which one is correct?
And if we want to do dimensionality reduction, we have to project our data onto the eigenvectors. But where are the eigenvectors generated by the SVD function?
I know that S holds the eigenvalues, but what are U and V?
To reduce the dimensionality of our data, do we need to subtract the mean from the input matrix and then multiply by the eigenvectors, or can we just multiply our input matrix by the eigenvectors (no need to subtract the mean first)?
EDIT
Suppose I want to do classification using SIFT as the features and an SVM as the classifier.
I have 10 images for training, and I arrange them so that each image is one row.
So the 1st row is for the 1st image, the 2nd row for the 2nd image, and so on...
Feat = [1 2 5 6 7;   % Image 1
        2 9 0 6 5;   % Image 2
        3 4 7 8 2;   % Image 3
        2 3 6 3 1;   % Image 4
        ...       ]; % and so on
To do dimensionality reduction (on my 10x5 matrix), we have to do A*EigenVector.
And from what you explained (@Sam Roberts), I can compute it using the EIGS function on the covariance matrix (instead of using the SVD function).
And as I arrange the features of the images in different rows, I need A'*A.
So it becomes:
Matrix=A'*A
MAT_Cov=Cov(Matrix)
[EigVector EigValue] = eigs(MAT_Cov);
is that right?
Eigenvector decomposition (EVD) and singular value decomposition (SVD) are closely related.
Let's say you have some data a = rand(3,4);. Note that this is not a square matrix - it represents a dataset of observations (rows) and variables (columns).
Do the following:
[u1,s1,v1] = svd(a);
[u2,s2,v2] = svd(a');
[e1,d1] = eig(a*a');
[e2,d2] = eig(a'*a);
Now note a few things.
Up to the sign (+/-), which is arbitrary, u1 is the same as v2. Up to a sign and an ordering of the columns, they are also equal to e1. (Note that there may be some very tiny numerical differences as well, due to slight differences in the svd and eig algorithms.)
Similarly, u2 is the same as v1 and e2.
s1 equals s2, and apart from some extra columns and rows of zeros, both also equal sqrt(d1) and sqrt(d2). Again, there may be some very tiny numerical differences just due to algorithmic issues (they'll be on the order of 10^-10 or so).
Note also that a*a' is basically the covariances of the rows, and a'*a is basically the covariances of the columns (that's not quite true - a would need to be centred first by subtracting the column or row mean for them to be equal, and there might be a multiplicative constant difference as well, but it's basically pretty similar).
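A quick numerical check (a sketch) of the equivalences above; eig makes no ordering guarantee, so the eigenvalues are sorted before comparing:
a = rand(3,4);
[u1,s1,v1] = svd(a);
[e1,d1] = eig(a*a');
% Singular values of a vs. square roots of the eigenvalues of a*a':
sv = diag(s1);                            % 3 singular values
[ev, idx] = sort(diag(d1), 'descend');
max(abs(sv - sqrt(ev)))                   % ~1e-15
% Columns of u1 vs. e1 agree up to sign and ordering; abs() removes
% the sign ambiguity:
max(max(abs(abs(u1) - abs(e1(:,idx)))))   % ~1e-15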
Now to answer your questions, I assume that what you're really trying to do is PCA. You can do PCA either by taking the original data matrix and applying SVD, or by taking its covariance matrix and applying EVD. Note that Statistics Toolbox has two functions for PCA - pca (in older versions princomp) and pcacov.
Both do essentially the same thing, but from different starting points, because of the above equivalences between SVD and EVD.
Strictly speaking, u1, v1, u2 and v2 above are not eigenvectors, they are singular vectors - and s1 and s2 are singular values. They are singular vectors/values of the matrix a. e1 and d1 are the eigenvectors and eigenvalues of a*a' (not a), and e2 and d2 are the eigenvectors and eigenvalues of a'*a (not a). a does not have any eigenvectors - only square matrices have eigenvectors.
Centring by subtracting the mean is a separate issue - you would typically do that prior to PCA, but there are situations where you wouldn't want to. You might also want to normalise by dividing by the standard deviation but again, you wouldn't always want to - it depends what the data represents and what question you're trying to answer.
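To tie this together, a short sketch (assuming the Statistics Toolbox pca function is available) showing that PCA via pca on the raw data matches PCA via svd on the centred data, up to the sign of each component:
a = rand(100, 4);
[coeff, score] = pca(a);    % pca centres the data internally
ac = a - mean(a, 1);        % centre each column (R2016b+; on older
                            % versions use bsxfun(@minus, a, mean(a,1)))
[U, S, V] = svd(ac, 'econ');
coeff2 = V;                 % principal component directions
score2 = U * S;             % projected (scored) data
max(abs(abs(coeff(:)) - abs(coeff2(:))))   % ~1e-15, up to sign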