What is the more general term, a matrix or an array?
Why is MATLAB named "matrix laboratory", then?
A matrix is a practical way to represent a linear transformation from a space of dimension n to a space of dimension m, in the form of an m-by-n array of scalar values.
It also makes it practical to perform linear algebra operations in a systematic way that can be implemented on a computer. For instance, if matrix A represents the linear transformation f and matrix B the linear transformation g, then the composition f o g is written A*B, where * denotes matrix multiplication. MATLAB also has many routines related to matrix operations (i.e. linear algebra operations), such as det, pinv, svd, etc.
As you can still see in MATLAB today, operators like * and / are strongly tied to matrix operations, and thus to linear algebra, which I think was the original goal of MATLAB in its early development, hence its name (admittedly speculative, but probably not far from reality).
To perform element-wise operations on n-dimensional data sets, you write .* or ./ instead, indicating that you are now performing array operations.
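For instance, a short illustration of the difference (the matrices here are arbitrary, made-up examples):

A = [0 -1; 1 0];    % matrix of a linear map f (rotation by 90 degrees)
B = [1 2; 3 4];     % matrix of another linear map g
v = [1; 0];

(A*B)*v             % composition f o g applied to v ...
A*(B*v)             % ... gives the same result as f(g(v))
A.*B                % element-wise product: an array operation, not a composition
det(A), pinv(B)     % a couple of the linear algebra routines mentioned above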
I would not say that array operations encompass matrix operations; they are different. The latter relate to linear algebra, while the former are just a practical way to operate on large sets of data. These data are not limited to numbers; they are just n-dimensional data sets of whatever you like (strings, numbers, cells, etc.).
MATLAB also has a very compact syntax for array operations on sub-blocks (i.e. linear or logical subscripts) that makes it easy to reorganize data sets in a single line of code before applying subsequent matrix or array operations, as in the sketch below.
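A few examples of that subscripting syntax (the data here are arbitrary):

x = [3 -1 4 -1 5 -9 2 6];   % some data
x(x < 0) = 0                % logical subscripts: zero out the negatives in one line
x([1 3 5])                  % linear subscripts: pick elements 1, 3 and 5
M = reshape(1:12, 3, 4);    % reorganize a vector into a 3-by-4 array
M(:, [4 1])                 % grab columns 4 and 1, in that order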
If you're asking about MATLAB, the word "matrix" typically refers to a 2d array, whereas an "array" can be n-dimensional.
Early versions of MATLAB supported only 2d matrices, not n-dimensional arrays. I believe support for n-dimensional arrays was introduced in version 5 of MATLAB.
I would say that MATLAB's matrix is a more advanced kind of array if you compare it to C-style arrays, e.g. double array[], or Java arrays, e.g. double array2[]. I would also say that the MATLAB matrix is better suited for mathematical purposes than the C++ vector or the Java ArrayList. However, if you mean the MATLAB array, then the answer is more complicated, and I would recommend the link about MATLAB data, which describes the mxArray type used to store most of the data in MATLAB.
The question is hard to answer completely without a better description of what you mean by "array", but I would say that, as far as the type is concerned, there is no difference between an array like a = [1,2,3,4] and a matrix like b = [1,2,3,4;5,6,7,8]. There can also be matrices of higher dimensions, such as c = ones(3,4,3). These are generally called matrices as well in MATLAB, or, if you need to be more specific, N-dimensional matrices.
Swift's library "Accelerate" has sparse matrix types and several classes of functions for sparse matrix multiplication with different argument types, and BLAS-like functions with sparse matrices and vectors.
Interestingly, there are no functions that produce a sparse vector from a sparse matrix dot product with a sparse vector. (Or at least I did not see any in Accelerate's documentation.)
It looks like the workflow using SparseVector_double for d = S . v could be:
Convert the sparse vector v into a dense vector (or matrix)
Use the function SparseMultiply
Make the dense result d sparse
Alternative workflows with the BLAS functions are possible, say, using the function sparse_matrix_product_sparse_double, but, again, the result is dense and has to be converted into a sparse vector/matrix.
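A minimal sketch of that dense-intermediate workflow, assuming a small hand-built matrix (the values are made up; SparseConvertFromCoordinate and DenseVector_Double are my choices for illustration, only SparseMultiply comes from the steps above):

import Accelerate

// A 3x3 sparse matrix S, built from coordinate (row, column, value) triplets
var rowIndices: [Int32] = [0, 1, 2, 0]
var colIndices: [Int32] = [0, 1, 2, 2]
var values: [Double]    = [2.0, 3.0, 4.0, 1.0]
let S = SparseConvertFromCoordinate(3, 3, 4, 1, SparseAttributes_t(),
                                    &rowIndices, &colIndices, &values)
defer { SparseCleanup(S) }

// Step 1: the sparse vector v, expanded into a dense buffer
var v: [Double] = [1.0, 0.0, 5.0]
var d = [Double](repeating: 0.0, count: 3)

// Step 2: d = S * v via SparseMultiply (dense vector in, dense vector out)
v.withUnsafeMutableBufferPointer { vBuf in
    d.withUnsafeMutableBufferPointer { dBuf in
        SparseMultiply(S,
                       DenseVector_Double(count: 3, data: vBuf.baseAddress!),
                       DenseVector_Double(count: 3, data: dBuf.baseAddress!))
    }
}

// Step 3: re-sparsify by scanning the dense result for non-zero entries
let nonZeros = d.enumerated().filter { $0.element != 0 }
print(nonZeros)   // (index, value) pairs that could seed a new sparse structure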
I have several questions:
Is my conjecture that there is no direct way of getting a sparse vector from a dot product correct?
What is the fastest and easiest way to convert the dense vector/matrix results from the dot product functions into a sparse vector/matrix?
Should I just scan for the 0's with a loop, or are there relevant library functions?
What are the reasons none of these functions produce sparse structures?
I have the following problem. I have an N x N real matrix-valued function Z(x; t), where x and t may in general be vectors. I have N_s observations (x_k, Z_k), k = 1, ..., N_s, and I'd like to find the vector of parameters t that best approximates the data in the least-squares sense, meaning I want the t that minimizes
S(t) = \sum_{k=1}^{N_s} \sum_{i=1}^{N} \sum_{j=1}^{N} \left( Z_{k,ij} - Z_{ij}(x_k; t) \right)^2
This is in general a non-linear fit of a matrix-valued function. I can only find examples of fitting scalar functions, which do not generalize immediately to a matrix function (nor to a vector function). I tried the scipy.optimize.leastsq function and the packages symfit and lmfit, but I still can't get a solution. I'm ending up writing my own code... any help is appreciated!
You can do curve fitting with multi-dimensional data. As far as I am aware, none of the low-level algorithms explicitly support multidimensional data, but they do minimize a one-dimensional array in the least-squares sense. And the fitting methods do not really care about the "independent variable(s)" x, except insofar as they help you calculate the array to be minimized, for example to calculate a model function to match to y data.
That is to say: if you can write a function that takes the parameter values and calculates the matrix to be minimized, just flatten that 2-d (or n-d) array to one dimension. The fit will not mind.
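A minimal sketch of that approach with scipy.optimize.least_squares (the matrix-valued model here is made up purely for illustration; substitute your own Z(x; t)):

import numpy as np
from scipy.optimize import least_squares

N = 3

def model(x, t):
    # Hypothetical N x N matrix-valued model Z(x; t) with two parameters.
    row = np.array([1.0, x, x**2])
    return t[0] * np.outer(row, row) + t[1] * np.eye(N)

def residuals(t, xs, Zs):
    # Stack every N x N residual matrix, then flatten to one long 1-d array.
    res = np.array([Zk - model(xk, t) for xk, Zk in zip(xs, Zs)])
    return res.ravel()

# Synthetic observations generated from known parameters, plus a little noise.
rng = np.random.default_rng(0)
t_true = np.array([0.7, -1.2])
xs = np.linspace(0.0, 2.0, 20)
Zs = [model(x, t_true) + 0.01 * rng.standard_normal((N, N)) for x in xs]

fit = least_squares(residuals, x0=[1.0, 0.0], args=(xs, Zs))
print(fit.x)   # should come out close to [0.7, -1.2]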
I am looking for two things: a way to define a matrix in Swift and a way to diagonalize said matrix.
So far, I've found a way to make something that resembles a matrix using this:
var NumColumns = 2
var NumRows = 4
var array = Array<Array<Double>>()
for _ in 0..<NumColumns {    // 0..< gives exactly NumColumns columns; 0... would add one extra
    array.append(Array(repeating: Double(), count: NumRows))
}
print(array)
But someone told me that this will not do, because after I have the matrix I will need to run a diagonalization algorithm on an actual matrix, not on something that merely resembles one.
Any ideas?
The common way of defining matrices in different languages (including Swift) is row-major order, so essentially your matrix is stored as a contiguous array of rows. This will allow you to do most linear algebra operations efficiently using Apple's Accelerate framework.
For your particular case, Apple already has something, although it is not nicely documented and the API is not "good-looking": Apple provides Swift bindings to LAPACK (a Fortran-based linear algebra library). To diagonalize your matrix you will need to find its eigenvectors using the dsyevd_ routine (http://www.netlib.org/lapack/explore-html/d2/d8a/group__double_s_yeigen_ga694ddc6e5527b6223748e3462013d867.html#ga694ddc6e5527b6223748e3462013d867). As output you will receive a matrix of eigenvectors (represented in column-major order) and a vector of eigenvalues (an array).
If you transpose that matrix using another API function, vDSP_mtransD (https://developer.apple.com/reference/accelerate/1450422-vdsp_mtransd), and create a diagonal matrix from the eigenvalues, you will get matrices V and D for which the equation A = VDV' is satisfied. This is, as far as I understand, exactly what you're looking for.
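A minimal sketch of calling dsyevd_ from Swift, assuming a small symmetric matrix and the classic CLAPACK interface shipped with Accelerate (the matrix values are arbitrary; the two-pass workspace query is the standard LAPACK pattern):

import Accelerate

// 3x3 symmetric matrix A, stored column-major as LAPACK expects
var a: [Double] = [2, 1, 0,
                   1, 2, 1,
                   0, 1, 2]
var n = __CLPK_integer(3)
var lda = n
var jobz: Int8 = 86           // ASCII 'V': compute eigenvectors too
var uplo: Int8 = 85           // ASCII 'U': the upper triangle is referenced
var w = [Double](repeating: 0, count: 3)   // eigenvalues end up here
var info: __CLPK_integer = 0

// First call with lwork = liwork = -1: ask LAPACK for the optimal workspace sizes
var lwork: __CLPK_integer = -1
var liwork: __CLPK_integer = -1
var workQuery = [Double](repeating: 0, count: 1)
var iworkQuery = [__CLPK_integer](repeating: 0, count: 1)
dsyevd_(&jobz, &uplo, &n, &a, &lda, &w, &workQuery, &lwork, &iworkQuery, &liwork, &info)

// Second call does the actual computation
lwork = __CLPK_integer(workQuery[0])
liwork = iworkQuery[0]
var work = [Double](repeating: 0, count: Int(lwork))
var iwork = [__CLPK_integer](repeating: 0, count: Int(liwork))
dsyevd_(&jobz, &uplo, &n, &a, &lda, &w, &work, &lwork, &iwork, &liwork, &info)

// On success (info == 0), w holds the eigenvalues (the diagonal of D) and
// a holds the eigenvectors, column-major, so that A = V * D * V'
print(info == 0 ? "eigenvalues: \(w)" : "dsyevd_ failed, info = \(info)")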
I come from the C/C++ programming world and find it difficult to understand what exactly a vector or matrix is in MATLAB, and why they are not simply called arrays everywhere.
What is a vector in MATLAB, and why is it not called or referred to as an array?
The "MAT" in MATLAB is for Matrix, not Math. In MATLAB, basically everything you do is calculations with what you would call matrices / vectors in mathematical terms.
It is common to call a numeric array a matrix (or a vector if it is 1-by-n), and to call other arrays simply arrays. You'll see terms like cell array, which is an array of cells.
This way you can use mathematical terms when describing calculations with numeric arrays. For instance, inv can be used to find the inverse of a matrix, rather than "the inverse of a numeric array". (By the way, never use inv; it was just an example.)
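For example (an arbitrary 2-by-2 matrix, just to show the terminology in action):

M = [4 7; 2 6];
inv(M)          % the inverse of the matrix M
M \ eye(2)      % usually preferred: solve a linear system instead of inverting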
MATLAB is designed to be used as a "matrix lab": a tool for numerically processing linear algebra objects such as vectors and matrices. So, in terms of data structures, it indeed works with n-dimensional arrays, but it has special names for the special cases: "vector" for a 1-D array and "matrix" for a 2-D array.
There are two alternative ways to compute the DCT and its inverse in MATLAB. One is dct2/idct2, and the other uses the transformation matrix computed by dctmtx. Why is there an alternative way based on matrix multiplications making use of dctmtx?
"If A is square, the two-dimensional DCT of A can be computed as D*A*D'. This computation is sometimes faster than using dct2, especially if you are computing a large number of small DCTs, because D needs to be determined only once."
Where D = dctmtx(n)
Source: http://www.mathworks.com/help/toolbox/images/ref/dctmtx.html
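A quick sketch illustrating that equivalence (assuming the Image Processing Toolbox, which provides dct2 and dctmtx; the 8-by-8 block is arbitrary):

A = magic(8);               % any 8-by-8 block of data
D = dctmtx(8);              % the 8-point DCT transformation matrix
B1 = dct2(A);               % 2-D DCT via dct2
B2 = D*A*D';                % 2-D DCT via matrix multiplication
max(abs(B1(:) - B2(:)))     % agrees up to round-off
A2 = D'*B2*D;               % inverse transform, equivalent to idct2(B2)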