bias dimension definition in coding neural network

In the following figure showing the code for defining the dimension of the bias b1 term, I wonder why the first dimension of b1 is not the batch size. Does that mean this bias is applied to every batch?
If I specify the b1 dimension to be (batch_size, 256), does that mean I am applying a different b1 to each batch? But theoretically it should still work, right? Also, what is the difference between tensors of shape (256), (256,) and (256,1)?
Figure: dimension definition of nn

The weights and biases of your neural network layer are not specified in terms of batch size.
e.g. w1 = torch.randn(784, 256):
This is a 2D matrix you're going to use for a matrix multiply.
784 is the dimension of your input image without considering batch size. (I'm guessing this is for MNIST? It looks like you're flattening the 2D images to 1D vectors, so 28*28 = 784.)
256 is the output dimension (how many logits you're using).
Similarly, b1 = torch.randn(256):
This is a 1D vector you're just adding to the logits.
256 is the dimension of the logits
PyTorch automatically broadcasts (repeats) these over the batch dimension for all your operations, so it doesn't matter what the batch size is.
E.g. in the addition, b1 is automatically repeated over the first dimension, so its actual shape for the add is (batch_size, 256).
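For example, a minimal sketch (the batch size of 32 here is just made up for illustration):
import torch
x = torch.randn(32, 784) # a dummy batch of 32 flattened images
w1 = torch.randn(784, 256)
b1 = torch.randn(256)
h = torch.mm(x, w1) + b1 # b1, shape (256,), is broadcast over the batch dimension
h.shape
>>> torch.Size([32, 256])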
By convention, pytorch "aligns" dimensions from right to left.
If any dimensions are missing, it repeats the tensor over the missing dimension.
If any dimension is 1, it repeats the tensor over that dimension to match the other operand.
Eg (copied from the docs on broadcasting)
>>> x=torch.empty(5,7,3)
>>> y=torch.empty(5,7,3)
# same shapes are always broadcastable (i.e. the above rules always hold)
>>> x=torch.empty((0,))
>>> y=torch.empty(2,2)
# x and y are not broadcastable, because x does not have at least 1 dimension
# can line up trailing dimensions
>>> x=torch.empty(5,3,4,1)
>>> y=torch.empty( 3,1,1)
# x and y are broadcastable.
# 1st trailing dimension: both have size 1
# 2nd trailing dimension: y has size 1
# 3rd trailing dimension: x size == y size
# 4th trailing dimension: y dimension doesn't exist
# but:
>>> x=torch.empty(5,2,4,1)
>>> y=torch.empty( 3,1,1)
# x and y are not broadcastable, because in the 3rd trailing dimension 2 != 3
This is really convenient because it means you don't have to redefine your neural net every time you want to use a different batch_size
The PyTorch docs cover broadcasting in more detail if you want to learn more: https://pytorch.org/docs/stable/notes/broadcasting.html
Also what is the difference between tensor (256), (256,) and (256,1)
As shapes, the first two give you exactly the same thing: (256) on its own is just the integer 256 (parentheses around a single value don't make a tuple; the trailing comma in (256,) does), and torch.randn(256) and torch.randn((256,)) both create a 1D vector of 256 elements.
The last one is different; you are creating a 2D tensor where the first dimension is 256 and the second dimension is 1. The underlying data is the same, and it doesn't matter as long as you're consistent about which you're using, but if you mix them, it often leads to undesired behavior:
Eg:
a = torch.randn(256)
b = torch.randn(256)
c = a + b
c.shape
>>> torch.Size([256])
Simple: they just add element-wise.
But notice what happens when one of them is reshaped with view(-1, 1):
b = b.view(-1,1) # -1 here means torch will infer the shape of this dimension based on the known size of the tensor and all other specified dimensions
b.shape
>>> torch.Size([256, 1])
c = a + b
Now, because of the broadcasting rules:
a (shape (256,)) is treated as shape (1, 256) and repeated over the first dimension, so it effectively becomes a (256, 256) tensor
b (shape (256, 1)) is repeated over its last dimension (size 1) so it matches the 256 columns of a
so:
c.shape
>>> torch.Size([256, 256])
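If that (256, 256) result wasn't what you intended, reshape the operands consistently before adding, for example:
c = a + b.view(-1) # both 1D again; c.shape is torch.Size([256])
c = a.view(-1, 1) + b # both (256, 1); c.shape is torch.Size([256, 1])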
Hint: the broadcasting rules can be hard to remember and are often a source of bugs. When in doubt about tensor shapes, it's worth running your code in an interpreter line by line with dummy data and checking what the shape of each tensor is, e.g. print(torch.mm(input, w1).shape).

Related

Does my Neural Net Vector Input Size Need to Match the Output Size?

I'm trying to use a neural network for binary classification. It consists of three layers: the first layer has three input neurons, the hidden layer has two neurons, and the output layer has three neurons that output a binary value of 1 or 0. Actually, the output is usually a floating-point number, but it typically rounds to a whole number.
If the network only outputs vectors of 3, then shouldn't my input vectors be the same size? Otherwise, for classification, how else do you map the output to the input?
I wrote the neural network in Excel using VBA based on the following article: https://www.analyticsvidhya.com/blog/2017/05/neural-network-from-scratch-in-python-and-r/
So far it works exactly as described in the article. I don’t have access to a machine learning library at the moment so I’ve chosen to give this a try.
For example:
If the output of the network is [n, n, n], does that mean that my input data has to be [n, n, n] also?
From what I read in here: Neural net input/output
It seems that's the way it should be. I'm not entirely sure though.
To put it simply:
for a regression task, your output usually has dimension [1] (if you predict a single value).
For a classification task, your output dimension should equal the number of classes (the outputs are probabilities, and they sum to 1).
So there is no need for the input and output dimensions to be equal. A NN is just a projection from one dimension to another.
For example,
regression, predicting house prices: the input is [1, 10] (ten features of the property), the output is [1] (the price)
classification, predicting a class (will it be sold or not): the input is [1, 11] (the same features plus the listed price), the output is [1, 2] (the probabilities of class 0 (will not be sold) and class 1 (will be sold); for example [1; 0], [0; 1] or [0.5; 0.5] and so on; it is binary classification)
Additionally, equality of input and output dimensions does occur in more specific tasks, for example autoencoder models (where you need to represent your data in another dimension and then map it back to the original dimension).
Again, the output dimension is the size of the output for a single batch, not for the whole dataset.
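To make the shapes concrete, here is a minimal PyTorch sketch mirroring the two examples above (the layer sizes are illustrative; a real model would have hidden layers and a training loop):
import torch
import torch.nn as nn
reg = nn.Linear(10, 1) # 10 property features -> 1 value (the price)
clf = nn.Sequential(nn.Linear(11, 2), nn.Softmax(dim=1)) # 11 features -> 2 class probabilities
print(reg(torch.randn(1, 10)).shape) # torch.Size([1, 1])
print(clf(torch.randn(1, 11)).shape) # torch.Size([1, 2]); each row sums to 1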

Computing the SVD of a rectangular matrix

I have a matrix M of size K x N, where K = 49152 is the dimension of the problem and N = 52 is the number of observations.
I have tried [U,S,V] = svd(M), but doing this I run out of memory.
I found another code which uses [U,S,V]=SVD(COV(M)) and it works well. My questions are what is the meaning of using the COV(M) command inside the SVD and what is the meaning of the resultant [U,S,V]?
Finding the SVD of the covariance matrix is a method to perform Principal Components Analysis, or PCA for short. I won't get into the mathematical details here, but PCA performs what is known as dimensionality reduction. If you'd like a more formal treatment of the subject, you can read my post about it here: What does selecting the largest eigenvalues and eigenvectors in the covariance matrix mean in data analysis?. Simply put, dimensionality reduction projects the data stored in the matrix M onto a lower-dimensional surface with the least amount of projection error. In this matrix, we are assuming that each column is a feature or a dimension and each row is a data point.
I suspect the reason why the SVD of the actual data matrix M occupies more memory than the SVD of the covariance matrix is that you have a large number of data points and a small number of features. The covariance matrix finds the covariance between pairs of features. If M is an m x n matrix where m is the total number of data points and n is the total number of features, cov(M) gives you an n x n matrix, so you are applying the SVD to something much smaller than M.
As for the meaning of U, S and V, for dimensionality reduction specifically, the columns of V are what are known as the principal components. The ordering of V is such that the first column is the axis of your data that describes the greatest amount of variability. As you move from the second column up to the nth column, you introduce more axes and the variability described by each decreases. When you reach the nth column, you are essentially describing your data in its entirety without reducing any dimensions. The diagonal values of S denote what is called the variance explained, and they follow the same ordering as V: as you progress through the singular values, they tell you how much of the variability in your data is described by each corresponding principal component.
To perform the dimensionality reduction, you can either take U and multiply by S, or take your mean-subtracted data and multiply by V. In other words, supposing X is the matrix M where each column has its mean computed and that mean subtracted from the column, the following relationship holds:
US = XV
To actually perform the final dimensionality reduction, you take either US or XV and retain the first k columns where k is the total amount of dimensions you want to retain. The value of k depends on your application, but many people choose k to be the total number of principal components that explains a certain percentage of your variability in your data.
For more information about the link between SVD and PCA, please see this post on Cross Validated: https://stats.stackexchange.com/q/134282/86678
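If you want to check this relationship numerically, here is a small NumPy sketch (the sizes are made up; each row is a data point and each column a feature, as assumed above):
import numpy as np
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 5)) # 100 data points, 5 features
X = M - M.mean(axis=0) # subtract each column's mean
U, s, Vt = np.linalg.svd(X, full_matrices=False) # economy-size SVD
print(np.allclose(U * s, X @ Vt.T)) # True: U*S == X*V
k = 2
reduced = X @ Vt.T[:, :k] # keep the first k principal components
print(reduced.shape) # (100, 2)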
Instead of [U, S, V] = svd(M), which tries to build a matrix U that is 49152 by 49152 (= 18 GB 😱!), do svd(M, 'econ'). That returns the “economy-class” SVD, where U will be 49152 by 52, S is 52 by 52, and V is also 52 by 52.
cov(M) will remove each dimension’s mean and evaluate the inner product, giving you a 52 by 52 covariance matrix. You can implement your own version of cov, called mycov, as
function [C] = mycov(M)
M = bsxfun(@minus, M, mean(M, 1)); % subtract each dimension’s mean over all observations
C = M' * M / (size(M, 1) - 1); % normalize by N-1 to match cov's default
end
(You can verify this works by looking at mycov(randn(49152, 52)), which should be close to eye(52), since each element of that array is IID-Gaussian.)
There are a lot of magical linear-algebraic properties and relationships between the SVD and EVD (i.e., the singular value and eigenvalue decompositions): because the covariance matrix cov(M) is a Hermitian matrix, its left- and right-singular vectors are the same, and are in fact also cov(M)’s eigenvectors. Furthermore, cov(M)’s singular values are also its eigenvalues: so svd(cov(M)) is just an expensive way to get eig(cov(M)) 😂, up to ±1 and reordering.
As @rayryeng explains at length, usually people reach for svd(M, 'econ') because they want eig(cov(M)) without evaluating cov(M), since you never want to compute cov(M) explicitly: it’s numerically unstable. I recently wrote an answer that showed, in Python, how to compute eig(cov(M)) using svd(M2, 'econ'), where M2 is the zero-mean version of M, used in the practical application of color-to-grayscale mapping, which might give you more context.

Understanding extrinsic stereoParameters (rotation) on Matlab stereoCameraCalibrator

Consider the following stereo camera system calibration parameters from the MATLAB stereoCameraCalibrator app.
R1 = stereoParams.CameraParameters1.RotationMatrices(:,:,N);
R2 = stereoParams.CameraParameters2.RotationMatrices(:,:,N);
R12 = stereoParams.RotationOfCamera2;
Where:
R1: rotation from world coordinates (for image N) to camera 1.
R2: rotation from world coordinates (for image N) to camera 2.
R12: rotation from camera 1 coordinates to camera 2. As described on a related SO question
If that is correct, shouldn't R12*R1 == R2 ?
But I'm getting different values, so what am I missing here?
Edit
Well, it seems all matrices are transposed. So: R12'*R1' == R2' !
Why are they transposed?
They are transposed because, when performing geometric transformations on coordinates, MATLAB uses row vectors, whereas column vectors are traditionally used in practice.
In other words, to transform a coordinate from one frame to another, you typically compute:
x' = A*x
A is the transformation matrix and x is a column vector of coordinates. The output x' is another column vector of coordinates. MATLAB in fact uses a row vector, so to achieve the same effect you must transpose the matrix A (i.e. A^{T}) and post-multiply by it instead of pre-multiplying by A:
x' = x*A^{T}
Here x is a row vector, and to ensure that the same weighted combinations of coordinates are accumulated, you must transpose A. The output x', however, is now a row vector as well.
This can also be verified by transposing the product of two matrices. Specifically, if x' = A*x, then in order to transform the output into a row vector x'^{T}, we must transpose the matrix-vector product:
x'^{T} = (A*x)^{T} = x^{T}*A^{T}
The last statement is a natural property of transposing the product of two matrices. See point 3 at the Transpose Wikipedia article for more details: https://en.wikipedia.org/wiki/Transpose#Properties
The reason why the transpose is performed ultimately stems from the way MATLAB lays out numbers in memory. MATLAB is a column-major language, which means that numbers in a matrix are stored column-wise. Therefore, if you were to populate a matrix one element at a time, this would be done column by column, so the coefficients are populated per column instead of per row as we are normally used to, ultimately leading to the convention described above.
Therefore, when you transpose both R12 and R1, you bring the matrices back from MATLAB's row-vector convention to the conventional column-vector setting, where coordinates are column vectors and transformations pre-multiply them. This is the setting we are used to, so multiplying the transposed R12 and R1 gives you the correct world-to-camera-2 transformation, i.e. R2 in that same conventional representation.
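As a quick sanity check of the transpose relationship, here is a small NumPy sketch with made-up rotations (A1 and A12 are textbook world-to-camera-1 and camera-1-to-camera-2 rotations; the MATLAB outputs are modelled as their transposes, as described above):
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

A1 = rot_z(0.3) @ rot_x(0.1) # world -> camera 1 (column-vector convention)
A12 = rot_z(-0.5) @ rot_x(0.2) # camera 1 -> camera 2
A2 = A12 @ A1 # world -> camera 2
R1, R12, R2 = A1.T, A12.T, A2.T # what MATLAB stores (row-vector convention)
print(np.allclose(R12.T @ R1.T, R2.T)) # True: R12'*R1' == R2'
print(np.allclose(R1 @ R12, R2)) # equivalently, R2 == R1*R12 in MATLAB's convention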

Matlab: How to convert a matrix into a Toeplitz matrix

Consider a discrete dynamical system where x[0] = rand() denotes the initial condition of the system.
I have generated an m by N matrix with the following step: generate m vectors, each with a different initial condition and dimension N (N indicates the number of samples or elements). This matrix is called R. Using R, how do I create a Toeplitz matrix T?
Mathematically,
R = [ x_0[0],  ..., x_0[n-1];
      ...
      x_m[0],  ..., x_m[n-1] ]
The Toeplitz matrix
T = [ x[n-1], x[n-2], ..., x[0];
      x[0],   x[n-1], ..., x[1];
      ...
      x[m-2], x[m-3], ..., x[m-1] ]
I tried working with toeplitz(R) but the dimensions change. The dimensions should not change, as seen mathematically.
According to the paper provided (Toeplitz structured chaotic sensing matrix for compressive sensing by Yu et al.), there are two chaotic sensing matrices involved. Let's explore them separately.
The Chaotic Sensing Matrix (Section A)
It is clearly stated that to create such a matrix you have to build m independent signals (sequences) with m different initial conditions (in the range ]0;1[) and then concatenate the signals row-wise (that is, one signal = one row). Each of these signals must have length N. This actually is your matrix R, which is correctly evaluated as it is. I'd still like to suggest a code improvement: instead of building a column and then transposing the matrix, you can build the matrix directly row by row:
R=zeros(m,N);
R(:,1)=rand(m,1); %build the first column with m initial conditions
Please note: randn() draws values from a Gaussian (normal) distribution, and such values might not lie in the range ]0;1[ required by the paper (right below equation 9), whereas rand() draws uniformly distributed values in that range.
After that, you can build every row with the logistic-map recurrence. Note that the -0.5 shift is applied only after the sequences are generated, so that the recurrence itself stays in ]0;1[:
for i = 1:m
    for j = 2:N % skip the first column
        R(i,j) = 4*R(i,j-1)*(1-R(i,j-1)); % logistic map, stays in ]0;1[
    end
end
R = R - 0.5; % center the entries after generating the chaotic sequences
The Toeplitz Chaotic Sensing Matrix (Section B)
It is clearly stated at the beginning of Section B that to build the Toeplitz matrix you should consider a single sequence x with a given, single initial condition. So let's build such a sequence (again applying the -0.5 shift only at the end):
x = rand(); % single initial condition in ]0;1[
for j = 2:N % skip the first element
    x(j) = 4*x(j-1)*(1-x(j-1)); % logistic map
end
x = x - 0.5; % center the sequence afterwards
Now, to build the matrix you can consider:
what does the first row look like? Well, it looks like the sequence itself, but flipped (i.e. instead of going from 0 to n-1, it goes from n-1 to 0)
what does the first column look like? It is the last element of x concatenated with the elements in the range 0 to m-2
Let's then build the first row (r) and the first column (c):
r=fliplr(x);
c=[x(end) x(1:m-1)];
Please note: in Matlab the indices start from 1, not from 0 (so instead of going from 0 to m-2, we go from 1 to m-1). Also end means the last element from a given array.
Now, looking at the help for the toeplitz() function, it is clearly stated that you can build a non-square Toeplitz matrix by specifying the first column and the first row. Therefore, finally, you can build the matrix as:
T=toeplitz(c,r);
Such a matrix will indeed have dimensions m x N, as reported in the paper.
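For a quick cross-check outside MATLAB, here is a rough NumPy/SciPy sketch of the same construction (the values of m, N and the seed are made up; scipy.linalg.toeplitz takes the first column and the first row, like toeplitz(c, r)):
import numpy as np
from scipy.linalg import toeplitz

m, N = 4, 10
rng = np.random.default_rng(0)
x = np.empty(N)
x[0] = rng.random() # single initial condition in (0, 1)
for j in range(1, N):
    x[j] = 4 * x[j-1] * (1 - x[j-1]) # logistic map
x = x - 0.5 # center the sequence afterwards
r = x[::-1] # first row: the flipped sequence
c = np.concatenate(([x[-1]], x[:m-1])) # first column
T = toeplitz(c, r)
print(T.shape) # (4, 10), i.e. m x N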
Even though the authors call both of them Φ, they actually are two separate matrices.
They do not take the Toeplitz of the Beta-Like Matrix (a Toeplitz matrix is not a function or operator of some kind), nor do they transform the Beta-Like Matrix into a Toeplitz matrix.
You have the Beta-Like Matrix (i.e. the Chaotic Sensing Matrix) first, and then the Toeplitz-structured Chaotic Sensing Matrix: that structure is what characterizes Toeplitz matrices, namely a diagonal-constant structure (all elements along a diagonal have the same value).

Finding proportional columns in matrix

I have a big matrix (1,000 rows and 50,000 columns). I know some columns are correlated (the rank is only 100) and I suspect some columns are even proportional. How can I find such proportional columns? (One way would be to loop over corr(M(:,j), M(:,k)), but is there anything more efficient?)
If I am understanding your problem correctly, you wish to determine which columns in your matrix are linearly dependent, meaning that one column is proportional to, i.e. a scalar multiple of, another. There's a very basic algorithm based on the QR decomposition. With QR decomposition, you can take any matrix and decompose it into a product of two matrices, Q and R. In other words:
A = Q*R
Q is an orthogonal matrix with each column being a unit vector, such that multiplying Q by its transpose gives the identity matrix (Q^{T}*Q = I). R is a right-triangular or upper-triangular matrix. One very useful result from Golub and Van Loan's 1996 book Matrix Computations is that a matrix is full rank if all of the diagonal elements of R are non-zero. Because of the finite floating-point precision on computers, we have to threshold: take the absolute value of all of the diagonals of R, then check whether they're greater than some tolerance. If a diagonal value is, then the corresponding column is an independent column.
We can slightly modify this and instead search for diagonal values that are less than the tolerance, which would mean that the corresponding column is not independent. The way you would call the QR factorization is:
[Q,R] = qr(A, 0);
Q and R are what I just talked about, and you specify the matrix A as input. The second parameter 0 produces an economy-size version of Q and R. This only matters when the matrix is tall: for an 8 x 5 matrix, qr(A) returns an 8 x 8 Q and an 8 x 5 R, whereas qr(A, 0) returns an 8 x 5 Q and a 5 x 5 R. For a wide matrix like yours the sizes are the same either way, but requesting the economy size does no harm.
Now, what we actually need is this style of invocation:
[Q,R,E] = qr(A, 0);
In this case, E would be a permutation vector, such that:
A(:,E) = Q*R;
The reason why this is useful is that the pivoted QR routine orders the columns of A (and the corresponding columns of Q and diagonals of R) so that the first column of the re-ordered version is the one most likely to be independent, followed by the remaining columns in decreasing order of "strength". As such, E tells you how likely each column is to be linearly independent, with those "strengths" in decreasing order. This "strength" is captured exactly in the diagonals of R corresponding to this re-ordering. What you should do is check which diagonals of R in the re-arranged version are greater than the first diagonal scaled by the tolerance, and use these to determine which of the corresponding columns are linearly independent.
However, I'm going to flip this around and determine the point in the R diagonals where the last possible independent column is located. Anything after this point is considered linearly dependent. This is essentially the same as checking whether any diagonals are less than the threshold, but we are using the re-ordering of the matrix to our advantage.
In any case, putting what I have mentioned in code, this is what you should do, assuming your matrix is stored in A:
%// Step #1 - Define tolerance
tol = 1e-10;
%// Step #2 - Do QR Factorization
[Q, R, E] = qr(A,0);
diag_R = abs(diag(R)); %// Extract diagonals of R
%// Step #3 -
%// Find the LAST column in the re-arranged result that
%// satisfies the linearly independent property
r = find(diag_R >= tol*diag_R(1), 1, 'last');
%// Step #4
%// Anything after r means that the columns are
%// linearly dependent, so let's output those columns to the
%// user
idx = sort(E(r+1:end));
Note that E is a permutation vector, and I'm assuming you want the result sorted, which is why we sort the indices that come after the point where the columns stop being linearly independent. Let's test out this theory. Suppose I have this matrix:
A =
1 1 2 0
2 2 4 9
3 3 6 7
4 4 8 3
You can see that the first two columns are the same, and the third column is a multiple of the first or second. You would just have to multiply either one by 2 to get the result. If we run through the above code, this is what I get:
idx =
1 2
If you also take a look at E, this is what I get:
E =
4 3 2 1
This means that column 4 was the "best" linearly independent column, followed by column 3. Because we returned [1,2] as the linearly dependent columns, this means that columns 1 and 2, which are both [1,2,3,4], are scalar multiples of some other column. In this case that is column 3, as columns 1 and 2 are each half of column 3.
Hope this helps!
Alternative Method
If you don't want to do any QR factorization, then I can suggest reducing your matrix into its row-reduced Echelon form, and you can determine the basis vectors that make up the column space of your matrix A. Essentially, the column space gives you the minimum set of columns that can generate all possible linear combinations of output vectors if you were to apply this matrix using matrix-vector multiplication. You can determine which columns form the column space by using the rref command. You would provide a second output to rref that gives you a vector of elements that tell you which columns are linearly independent or form a basis of the column space for that matrix. As such:
[B,RB] = rref(A);
RB gives you the locations of the columns that form the column space, and B is the row-reduced echelon form of the matrix A. Because you want to find the columns that are linearly dependent, you want to return the set of indices that doesn't contain these locations. As such, define a linearly increasing vector from 1 to the number of columns you have, then use RB to remove those entries from this vector; the result is the set of linearly dependent columns you are seeking. In other words:
[B,RB] = rref(A);
idx = 1 : size(A,2);
idx(RB) = [];
By using the above code, this is what we get:
idx =
2 3
Bear in mind that we identified columns 2 and 3 as linearly dependent, which makes sense as both are multiples of column 1. The identification of which columns are linearly dependent differs from the QR factorization method, as QR orders the columns based on how likely each particular column is to be linearly independent. Because columns 1 to 3 are related to each other, it shouldn't matter which of them is returned: any one of them forms a basis for the other two.
I haven't tested the efficiency of rref in comparison with the QR method. I suspect that rref performs Gaussian elimination, whose complexity is worse than that of QR factorization, which is a highly optimized and efficient algorithm. Because your matrix is rather large, I would stick to the QR method, but try rref anyway and see what you get!
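If you want to prototype the pivoted-QR idea outside MATLAB, SciPy exposes the same factorization; here is a rough sketch on the same 4 x 4 example (P plays the role of E, but with 0-based indices):
import numpy as np
from scipy.linalg import qr

A = np.array([[1, 1, 2, 0],
              [2, 2, 4, 9],
              [3, 3, 6, 7],
              [4, 4, 8, 3]], dtype=float)
tol = 1e-10
Q, R, P = qr(A, mode='economic', pivoting=True)
d = np.abs(np.diag(R))
keep = np.nonzero(d >= tol * d[0])[0] # diagonals that still count as independent
dependent = np.sort(P[keep[-1] + 1:]) # everything after the last one is dependent
print(dependent) # [0 1], i.e. columns 1 and 2 in MATLAB's 1-based indexing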
If you normalize each column by dividing by its maximum, proportionality becomes equality. This makes the problem easier.
Now, to test for equality you can use a single (outer) loop over columns; the inner loop is easily vectorized with bsxfun. For greater speed, compare each column only with the columns to its right.
Also to save some time, the result matrix is preallocated to an approximate size (you should set that). If the approximate size is wrong, the only penalty will be a little slower speed, but the code works.
As usual, tests for equality between floating-point values should include a tolerance.
The result is given as a 2-column matrix (S), where each row contains the indices of two columns that are proportional.
A = [1 5 2 6 3 1
2 5 4 7 6 1
3 5 6 8 9 1]; %// example data matrix
tol = 1e-6; %// relative tolerance
A = bsxfun(@rdivide, A, max(A,[],1)); %// normalize A
C = size(A,2);
S = NaN(round(C^1.5),2); %// preallocate result to *approximate* size
used = 0; %// number of rows of S already used
for c = 1:C
ind = c+find(all(abs(bsxfun(@rdivide, A(:,c), A(:,c+1:end))-1)<tol));
u = numel(ind); %// number of columns proportional to column c
S(used+1:used+u,1) = c; %// fill in result
S(used+1:used+u,2) = ind; %// fill in result
used = used + u; %// update number of results
end
S = S(1:used,:); %// remove unused rows of S
In this example, the result is
S =
1 3
1 5
2 6
3 5
meaning column 1 is proportional to column 3; column 1 is proportional to column 5 etc.
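For comparison, here is a rough NumPy version of the same normalize-and-compare idea (broadcasting plays the role of bsxfun; the column pairs come out 0-based):
import numpy as np

A = np.array([[1, 5, 2, 6, 3, 1],
              [2, 5, 4, 7, 6, 1],
              [3, 5, 6, 8, 9, 1]], dtype=float) # example data matrix
tol = 1e-6
An = A / A.max(axis=0) # normalize each column by its maximum
pairs = []
for c in range(An.shape[1] - 1):
    ratios = An[:, [c]] / An[:, c+1:] # column c against all columns to its right
    hits = np.nonzero(np.all(np.abs(ratios - 1) < tol, axis=0))[0]
    pairs += [(c, int(c + 1 + h)) for h in hits]
print(pairs) # [(0, 2), (0, 4), (1, 5), (2, 4)], matching S above (0-based)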
If the determinant of a 2 by 2 matrix is zero, then its columns are proportional.
There are 50,000 columns, i.e. 2 times 25,000 columns.
It is easiest to compute the determinant of a 2 by 2 matrix.
Hence, to find the proportional columns, the most laborious (but simplest) solution is to:
define the big matrix on a spreadsheet;
apply the determinant formula to a 2 by 2 square, beginning from the first square on the left;
copy it for every row and column to arrive at the answer in the next spreadsheet;
find the columns with zero determinants.
This is quite basic, not very time-consuming, and should be result-oriented, done either manually or with an Excel spreadsheet.