"Flatten" a 3D Matrix with L2 Norm Reduction - matlab

I have an n x m x d matrix A (i.e. A is like d matrices of size n x m stacked together). I would like to convert this into a single n x m matrix B where each element B(i,j) is a function of A(i,j,1), ..., A(i,j,d), more specifically the L2 norm of these values:
B(i,j) = sqrt[A(i,j,1)^2 + ... + A(i,j,d)^2]
Meaning I would like to condense or "flatten" the information in matrix A. How can I achieve this without resorting to a nested for loop?

Square elementwise and sum along the third dimension to produce an n x m matrix, then take the square root, for a vectorized implementation, like so -
B = sqrt(sum(A.^2,3))
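As a quick sanity check (the sizes here are arbitrary), this matches the nested-loop definition; on R2017b and newer, vecnorm(A, 2, 3) computes the same result directly:
n = 4; m = 5; d = 3;
A = rand(n, m, d);
B = sqrt(sum(A.^2, 3));                       % vectorized
Bloop = zeros(n, m);
for i = 1:n
    for j = 1:m
        Bloop(i,j) = norm(squeeze(A(i,j,:))); % L2 norm along dim 3
    end
end
assert(max(abs(B(:) - Bloop(:))) < 1e-12)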

Related

Discatenate a column vector to get back to its original square matrix in MATLAB

I had to convert an n x n matrix to an n^2 x 1 column vector for ease of some operations. Now that the operations are done, how do I return to the n x n form from the n^2 x 1 vector?
It is supposed to be the opposite of this: concatenation
Thanks!
You can use the reshape() function:
% M is your n^2 x 1 column vector, A is the n x n matrix you want to recover
A = reshape(M, [n n])
If your n x n matrix is 3x3, then:
A = reshape(M, [3 3])
For more info: http://www.mathworks.com/help/matlab/ref/reshape.html
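Note that reshape fills the output column-major, so if the vector was produced as M = A(:) (also column-major), the round trip recovers the original exactly:
n = 3;
A0 = magic(n);            % original n x n matrix
M = A0(:);                % flatten to an n^2 x 1 column vector
A = reshape(M, [n n]);
isequal(A, A0)            % returns true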

3D to 4D Matrix conversion Matlab

I have an m x n x p 3D matrix where each m x n slice is a 2D image (rows x columns) and p is the number of images.
I need to make this matrix 4D such that the new dimensions are m x n x 1 x p, i.e. with a singleton third dimension for each of the images.
How can I do this in MATLAB?
A call to permute should do the trick. Supposing that your image is stored in A, just do:
B = permute(A, [1 2 4 3]);
This transforms your m x n x p matrix into one with a singleton third dimension, moving the original third dimension (the image index) into the fourth.
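A minimal check with made-up sizes; since inserting a singleton dimension does not reorder the data, reshape(A, m, n, 1, p) should give the same result:
m = 4; n = 5; p = 6;
A = rand(m, n, p);
B = permute(A, [1 2 4 3]);
size(B)                                 % returns 4 5 1 6
isequal(B, reshape(A, m, n, 1, p))      % returns true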

I have N*M matrix and two 1*M row vectors I want to vectorize mathematical operations on them

I have an N by M matrix A, a 1 by M row vector B,
and another 1 by M row vector C. Can I vectorize the following code more than this?
for i = 1:N
A(i,:) = (A(i,:)-B)./C;
end;
and what about the more general case where we have K by M matrices (with N divisible by K) instead of vectors?
This is what bsxfun was designed to do:
A = bsxfun(@rdivide, bsxfun(@minus, A, B), C);
It will automatically expand the arrays of size [1 M] to be compatible with the one of size [N M], then perform the necessary array operations on them, returning an array of size [N M].
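As a side note: on MATLAB R2016b and newer, implicit expansion performs the same broadcast with plain operators, so the two bsxfun calls can be replaced by
A = (A - B) ./ C;   % B and C ([1 M]) expand along rows to match A ([N M])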
If your B and C arrays are of size [K M], then it's a bit more difficult. You didn't specify how the output should be shaped, but in the most general case you can compute (A-B)./C for every row of B and C and collect these matrices in an array of size [K N M]:
A = bsxfun(@rdivide, bsxfun(@minus, permute(A,[3 1 2]), permute(B,[1 3 2])), permute(C,[1 3 2]));
where A is transformed to an array of size [1 N M], and both B and C are transformed to size [K 1 M]. Depending on the size of your arrays along the various dimensions, you might benefit from putting M in front (since that's the dimension along which you're subtracting), but I'm not sure.
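As a quick check with made-up sizes, each slice R(k,:,:) of the [K N M] result should equal (A - B(k,:)) ./ C(k,:) with the row vectors expanded to [N M]:
N = 4; M = 3; K = 2;
A = rand(N, M); B = rand(K, M); C = rand(K, M);
R = bsxfun(@rdivide, bsxfun(@minus, permute(A,[3 1 2]), permute(B,[1 3 2])), permute(C,[1 3 2]));
for k = 1:K
    Rk = squeeze(R(k,:,:));                                   % N x M slice
    Dk = (A - repmat(B(k,:), N, 1)) ./ repmat(C(k,:), N, 1);  % direct computation
    assert(max(abs(Rk(:) - Dk(:))) < 1e-12)
end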
Unless you need raw speed, I would prefer a more explicit approach:
N = size(A, 1);
easy = (A-repmat(B, N, 1)) ./ repmat(C, N, 1);
repmat tiles its first argument: the second argument gives the number of repetitions in rows (vertically) and the third the number of repetitions in columns. So, in this case, B is turned into an N x M matrix by replicating the vector N times vertically only.
For the more general case, where B and C are K x M and N/K is an integer:
rowReps = size(A, 1)/size(B, 1);
notMuchHarder = (A - repmat(B, rowReps, 1)) ./ repmat(C, rowReps, 1);
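For example, with made-up sizes N = 6, K = 2, M = 3, each K-row block of B and C is stacked N/K = 3 times to match A:
A = rand(6, 3); B = rand(2, 3); C = rand(2, 3);
rowReps = size(A, 1) / size(B, 1);           % 3
notMuchHarder = (A - repmat(B, rowReps, 1)) ./ repmat(C, rowReps, 1);
size(notMuchHarder)                          % returns 6 3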

2-D convolution as a matrix-matrix multiplication [closed]

I know that, in the 1D case, the convolution of two vectors a and b can be computed as conv(a, b), but also as the product T_a * b, where T_a is the corresponding Toeplitz matrix for a.
Is it possible to extend this idea to 2D?
Given a = [5 1 3; 1 1 2; 2 1 3] and b = [4 3; 1 2], is it possible to convert a into a Toeplitz matrix and compute the matrix-matrix product between T_a and b as in the 1-D case?
Yes, it is possible, and you should also use a doubly block circulant matrix (which is a special case of a Toeplitz matrix). I will give you an example with a small kernel and input, but it is possible to construct a Toeplitz matrix for any kernel. So you have a 2D input x and a 2D kernel k, and you want to calculate the convolution x * k. Also let's assume that k is already flipped. Let's also assume that x is of size n×n and k is m×m.
So you unroll k into a sparse matrix of size (n-m+1)^2 × n^2, and unroll x into a long vector of size n^2 × 1. You compute the multiplication of this sparse matrix with the vector and convert the resulting vector (which will have size (n-m+1)^2 × 1) into an (n-m+1) × (n-m+1) matrix.
I am pretty sure this is hard to understand just from reading, so here is a concrete example for a 2×2 kernel and a 3×3 input: the kernel unrolls into a 4×9 sparse matrix, the input into a 9×1 vector, their product is a 4×1 vector, and reshaping that vector gives the 2×2 output. This is the same result you would get by doing a sliding window of k over x.
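A minimal MATLAB sketch of this construction (using a dense T for readability; the sizes and data are arbitrary), where T * x(:) computes the valid correlation of x with the already-flipped k:
n = 3; m = 2;
x = magic(n);                         % example n x n input
k = rand(m);                          % example (already flipped) m x m kernel
out = n - m + 1;
T = zeros(out^2, n^2);
row = 1;
for j = 1:out                         % output column (column-major order)
    for i = 1:out                     % output row
        P = zeros(n);
        P(i:i+m-1, j:j+m-1) = k;      % place k at output position (i,j)
        T(row, :) = P(:).';
        row = row + 1;
    end
end
y = reshape(T * x(:), out, out);
c = conv2(x, rot90(k,2), 'valid');    % the same thing via built-in convolution
assert(max(abs(y(:) - c(:))) < 1e-12)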
1- Define Input and Filter
Let I be the input signal and F be the filter or kernel.
2- Calculate the final output size
If I is m1 x n1 and F is m2 x n2, the size of the (full) output will be (m1 + m2 - 1) x (n1 + n2 - 1).
3- Zero-pad the filter matrix
Zero pad the filter to make it the same size as the output.
4- Create Toeplitz matrix for each row of the zero-padded filter
5- Create a doubly blocked Toeplitz matrix
Now all these small Toeplitz matrices should be arranged in a big doubly blocked Toeplitz matrix.
6- Convert the input matrix to a column vector
7- Multiply the doubly blocked Toeplitz matrix with the vectorized input signal
This multiplication gives the convolution result.
8- Last step: reshape the result to a matrix form
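As an illustration (not the code from the repository below), here is a MATLAB sketch that builds a matrix T with the same effect entry by entry, so that T * I(:) equals vec(conv2(I, F)), using the a and b from the question as I and F:
I = [5 1 3; 1 1 2; 2 1 3];                 % m1 x n1 input
F = [4 3; 1 2];                            % m2 x n2 filter
[m1, n1] = size(I); [m2, n2] = size(F);
outR = m1 + m2 - 1; outC = n1 + n2 - 1;    % step 2: full output size
T = zeros(outR * outC, m1 * n1);
for i = 1:outR
    for j = 1:outC
        row = (j - 1) * outR + i;          % column-major output index
        for u = 1:m1
            for v = 1:n1
                fi = i - u + 1; fj = j - v + 1;
                if fi >= 1 && fi <= m2 && fj >= 1 && fj <= n2
                    T(row, (v - 1) * m1 + u) = F(fi, fj);
                end
            end
        end
    end
end
y = reshape(T * I(:), outR, outC);         % steps 6-8
assert(isequal(y, conv2(I, F)))            % matches MATLAB's full 2-D convolution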
For more details and python code take a look at my github repository:
Step by step explanation of 2D convolution implemented as matrix multiplication using toeplitz matrices in python
If you unravel k to a m^2 vector and unroll X, you would then get:
an m**2 vector for the unrolled k
an ((n-m)**2, m**2) matrix for unrolled_X
where unrolled_X could be obtained by the following Python code:
from numpy import zeros
def unroll_matrix(X, m):
    # unroll the n x n input X into a ((n-m)**2, m**2) matrix,
    # one row per m x m patch
    flat_X = X.flatten()
    n = X.shape[0]
    unrolled_X = zeros(((n - m) ** 2, m ** 2))
    skipped = 0
    for i in range(n ** 2):
        # keep positions whose row and column leave room for the patch
        # (integer division // for Python 3)
        if (i % n) < n - m and ((i // n) % n) < n - m:
            for j in range(m):
                for l in range(m):
                    unrolled_X[i - skipped, j * m + l] = flat_X[i + j * n + l]
        else:
            skipped += 1
    return unrolled_X
Unrolling X rather than k allows a more compact representation (smaller matrices) than the other way around - but you need to unroll each X. You might prefer unrolling k depending on what you want to do.
Here, the unrolled_X is not sparse, whereas unrolled_k would be sparse, but of size ((n-m+1)^2, n^2) as @Salvador Dali mentioned.
Unrolling k could be done like this:
from scipy.sparse import lil_matrix
from numpy import zeros

def unroll_kernel(kernel, n, sparse=True):
    m = kernel.shape[0]
    if sparse:
        unrolled_K = lil_matrix(((n - m) ** 2, n ** 2))
    else:
        unrolled_K = zeros(((n - m) ** 2, n ** 2))
    skipped = 0
    for i in range(n ** 2):
        # same position test as above, with integer division for Python 3
        if (i % n) < n - m and ((i // n) % n) < n - m:
            for j in range(m):
                for l in range(m):
                    unrolled_K[i - skipped, i + j * n + l] = kernel[j, l]
        else:
            skipped += 1
    return unrolled_K
The code shown above doesn't produce the unrolled matrix of the right dimensions. The dimensions should be (n-k+1)*(m-k+1) by k*k, where k is the filter dimension, n the number of rows in the input matrix, and m the number of columns.
from numpy import zeros

def unfold_matrix(X, k):
    n, m = X.shape[0:2]
    xx = zeros(((n - k + 1) * (m - k + 1), k ** 2))
    row_num = 0
    def make_row(x):
        return x.flatten()
    for i in range(n - k + 1):
        for j in range(m - k + 1):
            # collect the k x k block and convert it to a row
            xx[row_num, :] = make_row(X[i:i + k, j:j + k])
            row_num = row_num + 1
    return xx
For more details, see my blog post:
http://www.telesens.co/2018/04/09/initializing-weights-for-the-convolutional-and-fully-connected-layers/

How to compute cosine similarity using two matrices

I have two matrices, A (dimensions M x N) and B (N x P). In fact, they are collections of vectors - row vectors in A, column vectors in B. I want to get cosine similarity scores for every pair a and b, where a is a vector (row) from matrix A and b is a vector (column) from matrix B.
I have started by multiplying the matrices, which results in matrix C (dimensions M x P).
C = A*B
However, to obtain cosine similarity scores, I need to divide each value C(i,j) by the norm of the two corresponding vectors. Could you suggest the easiest way to do this in Matlab?
The simplest solution would be computing the norms first using element-wise multiplication and summation along the desired dimensions:
normA = sqrt(sum(A .^ 2, 2));
normB = sqrt(sum(B .^ 2, 1));
normA and normB are now a column vector and row vector, respectively. To divide corresponding elements in A * B by normA and normB, use bsxfun like so:
C = bsxfun(@rdivide, bsxfun(@rdivide, A * B, normA), normB);
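On R2016b and newer, implicit expansion gives the same result without bsxfun; normA (M x 1) and normB (1 x P) expand to an M x P denominator:
C = (A * B) ./ (normA .* normB);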
You can use SciPy to compute it very easily.
from scipy.spatial import distance
# rows of A are the vectors; B holds its vectors in columns, hence B.T
cosine_sim = 1 - distance.cdist(A, B.T, 'cosine')
Pass your 2D matrices as shown above and SciPy will return a NumPy array.
Refer to the doc here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html