There are two alternative ways to compute the 2-D DCT and its inverse in MATLAB: the dct2/idct2 functions, and matrix multiplication with the transformation matrix returned by dctmtx. Why is there an alternative way based on matrix multiplications using dctmtx?
"If A is square, the two-dimensional DCT of A can be computed as D*A*D'. This computation is sometimes faster than using dct2, especially if you are computing a large number of small DCTs, because D needs to be determined only once."
where D = dctmtx(n).
Source: http://www.mathworks.com/help/toolbox/images/ref/dctmtx.html
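To make the trade-off concrete, here is a minimal sketch comparing the two approaches (the size n = 8 and the random test matrix are arbitrary choices; dct2 and dctmtx ship with the Image Processing Toolbox):

n = 8;
D = dctmtx(n);             % transform matrix, computed only once
A = rand(n);
B1 = dct2(A);              % 2-D DCT via the library routine
B2 = D * A * D';           % same transform via matrix multiplication
max(abs(B1(:) - B2(:)))    % agrees up to floating-point rounding
A2 = D' * B2 * D;          % inverse 2-D DCT: D is orthogonal, so inv(D) = D'

If you transform many small blocks of the same size, reusing D amortizes its construction cost, which is exactly the scenario the documentation describes.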
Consider an n x n matrix A. It is not a special matrix, and in the worst case all of its entries are non-zero. I am looking for a way to compute A*A^T using matrix-vector operations. The total flop count is (2n-1) * n(n+1)/2, because for a symmetric matrix C = A*A^T all I have to do is compute the entries on and below the diagonal, C(i,j) = A(i,:) * A(j,:)^T; once I have the lower triangular part, I just say that the upper triangular part is the same. The problem is: can I do this with matrix-vector multiplications, or will that force me to perform unnecessary work (like multiplying elements in the upper part)? It is clear that a scalar, entry-by-entry computation would work, but I am interested to know whether a matrix-vector computation would work or not.
MATLAB is already smart enough to do this for you. When MATLAB encounters an expression like A*A.', it recognizes that the two operands are the same and calls a symmetric BLAS routine in the background to do the calculation. This symmetric routine does exactly what you want: it performs only about half the operations and produces an exactly symmetric result.
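A quick way to check this on your own machine (a sketch; timings depend on your MATLAB version and BLAS, and n is an arbitrary choice):

n = 3000;
A = rand(n);
B = A.';                     % same values, but a separate, materialized variable
tic; C1 = A * A.'; toc       % shared-operand pattern -> symmetric BLAS path
tic; C2 = A * B;   toc       % generic matrix multiply, roughly twice the work
isequal(C1, C1.')            % true: the result is exactly symmetric
max(abs(C1(:) - C2(:)))      % both paths agree up to rounding

Note that the detection is tied to the expression A*A.' itself; once the transpose is stored in a separate variable, MATLAB typically falls back to the general multiply.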
I have the following problem. I have an N x N real matrix-valued function Z(x; t), where x and t may be vectors in general. I have N_s observations (x_k, Z_k), k = 1, ..., N_s, and I'd like to find the vector of parameters t that best approximates the data in the least-squares sense, meaning I want the t that minimizes
S(t) = \sum_{k=1}^{N_s} \sum_{i=1}^{N} \sum_{j=1}^{N} \left( Z_{k,ij} - Z_{ij}(x_k; t) \right)^2
This is, in general, a non-linear fit of a matrix-valued function. I can only find examples that fit scalar functions, and these are not immediately generalizable to a matrix function (nor to a vector function). I tried the scipy.optimize.leastsq function and the packages symfit and lmfit, but I still can't find a solution. I'm about to end up writing my own code... any help is appreciated!
You can do curve fitting with multi-dimensional data. As far as I am aware, none of the low-level algorithms explicitly support multidimensional data, but they all minimize a one-dimensional array of residuals in the least-squares sense. The fitting methods do not really care about the "independent variable(s)" x except insofar as they help you calculate the array to be minimized, for example by calculating a model function to match to y data.
That is to say: if you can write a function that takes the parameter values and calculates the matrix to be minimized, just flatten that 2-D (or n-D) array to one dimension; the fit will not mind. See the sketch below.
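As a sketch of this flatten-then-fit idea, here it is in MATLAB (the language used throughout this document) with lsqnonlin from the Optimization Toolbox; scipy.optimize.leastsq behaves the same way if your Python residual function returns the flattened array. Zmodel, xdata, Zdata, and t0 are hypothetical stand-ins for your model and observations:

% Zmodel(x, t): hypothetical model returning an N-by-N matrix for input x
% xdata: 1-by-Ns inputs; Zdata: N-by-N-by-Ns observations; t0: initial guess
function r = matrix_residuals(t, xdata, Zdata)
    N2 = size(Zdata,1) * size(Zdata,2);
    r  = zeros(N2 * numel(xdata), 1);
    for k = 1:numel(xdata)
        Rk = Zdata(:,:,k) - Zmodel(xdata(k), t);  % N-by-N residual matrix
        r((k-1)*N2 + (1:N2)) = Rk(:);             % flattened into one long vector
    end
end

t_fit = lsqnonlin(@(t) matrix_residuals(t, xdata, Zdata), t0);

lsqnonlin minimizes the sum of squares of all elements of r, which is exactly S(t) above.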
Which is the more general term, matrix or array?
Why is MATLAB named matrix laboratory, then?
A matrix is a practical way to represent a linear transformation from a space of dimension n to a space of dimension m, in the form of an m x n array of scalar values.
It is also very practical for performing linear algebra operations in a systematic way that can be implemented on a computer. For instance, if matrix A represents the linear transformation f and matrix B the linear transformation g, then the composition f o g is written A*B, where * denotes matrix multiplication. MATLAB also has a lot of routines for matrix operations (i.e. linear algebra operations), such as det, pinv, svd, etc.
As you can still see in MATLAB today, operators like * and / are strongly tied to matrix operations, and thus to linear algebra, which I think was the original goal of MATLAB in its early development, hence its name (admittedly speculative, but probably not far from reality).
To perform element-wise operations on n-dimensional data sets, you instead write .* or ./, denoting that you are now performing array operations.
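For example (a throwaway pair of 2-by-2 matrices):

A = [1 2; 3 4];  B = [5 6; 7 8];
A * B     % matrix product (linear algebra):        [19 22; 43 50]
A .* B    % element-wise product (array operation): [ 5 12; 21 32]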
I would not say array operations encompass matrix operations; they are different. The latter relate to linear algebra, while the former are just a practical way to operate on large sets of data. These data are not limited to numbers; they are just n-dimensional data sets of anything (strings, numbers, cells, etc.).
MATLAB also has a very concise syntax for performing array operations on sub-blocks (via index vectors or logical subscripts) that makes it very easy to reorganize data sets in just one line of code before applying subsequent matrix or array operations, as in the example below.
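For instance (v is just a throwaway matrix):

v = magic(4);
block = v(2:3, [1 4]);   % extract a 2-by-2 sub-block with index vectors
v(v < 5) = 0;            % logical subscripting: zero every entry below 5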
If you're asking about MATLAB, the word "matrix" typically refers to a 2d array, whereas an "array" can be n-dimensional.
Early versions of MATLAB supported only 2d matrices, not n-dimensional arrays. I believe support for n-dimensional arrays was introduced in version 5 of MATLAB.
I would say that MATLAB's matrix is a more advanced kind of array compared to C-style arrays (e.g. double array[]) or Java arrays (e.g. double arr2[]). I would also say that the MATLAB matrix is better suited for mathematical purposes than the C++ vector or the Java ArrayList. If you mean the MATLAB array, however, the picture is more complicated, and I would recommend the MATLAB documentation on the mxArray type, which is used to store most data in MATLAB.

The question is hard to answer completely without a better description of what you mean by "array", but I would say that, regarding type, there is no difference between an array like a = [1,2,3,4] and a matrix like b = [1,2,3,4;5,6,7,8]. There can also be matrices of higher dimensions, such as c = ones(3,4,3). These are generally called matrices in MATLAB as well, or, to be more specific, N-dimensional matrices.
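To make that last point concrete, using the variables above:

a = [1,2,3,4];            % an "array" (row vector)
b = [1,2,3,4;5,6,7,8];    % a "matrix"
c = ones(3,4,3);          % an N-dimensional (here 3-D) array
class(a), class(b), class(c)   % all 'double': the underlying type is identical
ndims(a), ndims(b), ndims(c)   % 2, 2, 3: even a vector is stored as 2-D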
I need to pre-compute the histogram intersection kernel matrices for using LIBSVM in MATLAB.
Assume x, y are two vectors. The kernel function is K(x, y) = sum(min(x, y)). In order to be efficient, the best practice in most cases is to vectorize the operations.
What I want to do is compute the kernel matrix the same way one computes the Euclidean distance between two matrices, as in pdist2(A, B, 'euclidean'). After defining a function intKernel, I could calculate the intersection kernel by calling pdist2(A, B, @intKernel).
I know the function pdist2 may be an option, but I have no idea how to write the self-defined distance function. In particular, I do not know how to compute the intersection kernel between a vector (1-by-M) and a matrix (M-by-N) in one condensed expression.
repmat may not be feasible, because the matrix is really large, say 20000-by-360000.
Any help would be appreciated.
I think pdist2 is a good option, so let me help you define your distance function.
According to the documentation, the self-defined distance function must take 2 inputs: the first is a 1-by-N vector; the second is an M-by-N matrix (be careful of the order!).
To avoid the use of repmat, which is indeed memory-consuming, you can use bsxfun to apply basic operations to data with expansion over singleton dimensions. In your case, you can do the following:
distance_kernel = @(x,Y) sum(bsxfun(@min,x,Y),2);
Summation is done over the columns to get a column vector as output.
Then just call pdist2 and you are done.
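Putting it together, a minimal usage sketch (the sizes are small stand-ins for the real 20000-by-360000 data):

A = rand(5, 300);    % 5 observations, one per row
B = rand(7, 300);    % 7 observations, one per row
distance_kernel = @(x,Y) sum(bsxfun(@min,x,Y),2);
K = pdist2(A, B, distance_kernel);   % 5-by-7; K(i,j) = sum(min(A(i,:), B(j,:)))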
I currently have a large matrix M (~100x100x50 elements) containing both positive and negative values. At the moment, if I want to smooth this matrix, I use the smooth3 function to apply a Gaussian kernel over the entire 3-D matrix.
What I want to achieve is a variable level of smoothing within this matrix, i.e. different parts of the matrix M are smoothed with different values of sigma depending on the corresponding value in a similarly sized 3-D matrix, d (with values ranging from 0 to 1). Where d is 0, no smoothing occurs; where d is 1, the maximum level of smoothing occurs.
The fact that the matrix is 3-D is incidental. Smoothing in 3 dimensions is nice but not essential, and my current code (which performs various other manipulations) handles each of the 50 slices of M separately anyway. I am happy to replace smooth3 with a convolution of M with a Gaussian kernel, performed on each slice individually. What I can't figure out is how to vary the sigma of this Gaussian (based on d) according to its location in M, and output the result accordingly.
An alternative approach may be to use the matrix d as a mask between M and a very smooth version of it, Ms, and somehow combine M and Ms to give an equivalent result. However, I'm not convinced this will work, as I can't think of a way to combine M and Ms that won't give artefacts of one or the other when 0 < d < 1... any thoughts?
[I'm using 2009b, and only have access to the Signal Processing toolbox.]
You should have a look at the Guided Image Filter. It is a computationally efficient generalization of the bilateral filter.
http://research.microsoft.com/en-us/um/people/jiansun/papers/guidedfilter_eccv10.pdf
It will allow you to do proper smoothing based on your guidance matrix.
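If the guided filter is out of reach (e.g. because of your toolbox constraints), here is a minimal sketch of the blending idea from your question, not the guided filter itself: pre-smooth M at a few sigma levels using plain conv2 (base MATLAB, no extra toolbox), then interpolate per element according to d. The sigma levels are arbitrary assumptions to tune:

% stack(:,:,:,1) is the original M; higher levels are progressively smoother copies
sigmas = [0.75 1.5 3];                       % assumed smoothing levels
stack  = cat(4, M, zeros([size(M) numel(sigmas)]));
for s = 1:numel(sigmas)
    r = ceil(3*sigmas(s));
    g = exp(-(-r:r).^2 / (2*sigmas(s)^2));
    g = g / sum(g);                          % normalized 1-D Gaussian kernel
    for k = 1:size(M,3)                      % slice by slice, as in your current code
        stack(:,:,k,s+1) = conv2(g, g, M(:,:,k), 'same');  % separable 2-D smoothing
    end
end
% Map d in [0,1] to a fractional level and blend the two nearest levels
nLev = size(stack, 4);
lev  = 1 + d*(nLev - 1);                     % d = 0 -> original, d = 1 -> smoothest
lo   = floor(lev);  hi = min(lo + 1, nLev);  w = lev - lo;
base = reshape(1:numel(M), size(M));         % linear indices into one level of the stack
out  = (1 - w).*stack(base + (lo - 1)*numel(M)) + w.*stack(base + (hi - 1)*numel(M));

Because each output voxel is a linear interpolation between two adjacent smoothing levels, the result varies continuously with d, which avoids the hard seams between M and Ms that you were worried about for 0 < d < 1.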