MuPAD: Cannot work with symbolic expansion when handling matrix expressions - matlab

I am working in MuPAD in order to have a symbolic tool to find the solution of an equation, but I am working with matrices.
Consider this:
blck := A -> matrix([
[A[1..linalg::matdim(A)[1]/2,1..linalg::matdim(A)[2]/2],
A[1..linalg::matdim(A)[1]/2,linalg::matdim(A)[2]/2+1..linalg::matdim(A)[2]]],
[A[linalg::matdim(A)[1]/2+1..linalg::matdim(A)[1],1..linalg::matdim(A)[2]/2],
A[linalg::matdim(A)[1]/2+1..linalg::matdim(A)[1],linalg::matdim(A)[2]/2+1..linalg::matdim(A)[2]]]
])
This function enables me to have a block representation of a matrix and it works. Now consider this function
myfun := A -> matrix([[blck(A)[1,1]*blck(A)[2,2]*blck(A)[2,1],blck(A)[1,1]],
[blck(A)[1,1],blck(A)[1,1]]])
This manipulates the matrix a little and returns a matrix whose components are combined somehow. The problem is that, since I cannot tell MuPAD that matrix A and its components are matrices and not reals, MuPAD will show me matrix products in a different order.
For example, consider
myfun(matrix([[A11,A12],[A21,A22]]))
The first component of the returned matrix, element (1,1), is A11*A21*A22, which is incorrect since A11, A12, A21, A22 are matrices: the product should stay in the written order A11*A22*A21!
How can I tell MuPAD that A11, A12, A21 and A22 are matrices so that it will expand products correctly?

You can have matrices inside matrices in MuPAD, as long as you explicitly put them in there. Just telling the system to treat A1*A2 as non-commutative is more difficult and not well supported. You could go full-blown and create your own datatype and implement the arithmetic accordingly, but that's not necessarily easy if you still want simplifications to happen.
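As a side note, if a recent MATLAB release with the Symbolic Math Toolbox is available (an assumption about your setup: symbolic matrix variables need R2021a or later), you can get non-commutative matrix symbols directly in MATLAB rather than in MuPAD. A minimal sketch:
% Minimal sketch, assuming MATLAB R2021a+ with the Symbolic Math Toolbox.
% Symbolic matrix variables are non-commutative, so products keep their order.
syms A11 A12 A21 A22 [2 2] matrix
expr = A11*A22*A21   % stays A11*A22*A21; it is not reordered to A11*A21*A22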

Related

numerical problem after iterations in Matlab

I ran into some numerical problems when running a simulation in MATLAB. Here are the questions:
I found that A*A' (a matrix times its transpose) is not guaranteed to be symmetric in MATLAB. What is the reason for this? Also, I will have A*C*A', where C is a symmetric matrix, and I would like to keep A*C*A' symmetric. Is there any method to fix the numerical difference created by the transpose operation?
I implemented a for loop in MATLAB to compute a set of matrices. A small numerical difference (around 10^(-10)) in each round accumulates into the next run, and it finally diverges after around 30 rounds. Is there any method to fix the small error in each run without affecting the result?
Thank you for reading my questions!
"I found that A*A' (a matrix times its transpose) is not guaranteed to be symmetric in MatLab."
I would dispute that statement as written. The MATLAB parser is smart enough to recognize that the operands of A*A' are the same and call a symmetric BLAS routine in the background to do the work, and then manually copy one triangle into the other resulting in an exactly symmetric result. Where one usually gets into trouble is by coding something that the parser cannot recognize. E.g.,
A = whatever;
B = whatever;
X = A + B;
(A+B) * (A+B)'   % MATLAB parser will call a generic BLAS routine
X * X'           % MATLAB parser will call a symmetric BLAS routine
In the first matrix multiply above, the MATLAB parser may not be smart enough to recognize the symmetry, so a generic matrix multiply BLAS routine (e.g., dgemm) could be called to do the work, and the result is not guaranteed to be exactly symmetric. But in the second matrix multiply above the MATLAB parser does recognize the symmetry and calls a symmetric BLAS matrix multiply routine.
For the A*C*A' case, I don't know of any method to force MATLAB to generate an exactly symmetric result. You could manually copy one resulting triangle into the other after the fact. I suppose you could also factor C into two parts X*X' and then regroup, but that seems like too much work for what you are trying to do.
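A sketch of that after-the-fact fix (variable names are assumptions; A and C as in your question):
S  = A*C*A';
S1 = (S + S')/2;             % symmetrize by averaging the two triangles
S2 = triu(S) + triu(S,1)';   % or copy the upper triangle into the lower one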

Nonlinear curve fitting of a matrix function in python

I have the following problem. I have an N x N real matrix called Z(x; t), where x and t may be vectors in general. I have N_s observations (x_k, Z_k), k = 1, ..., N_s and I'd like to find the vector of parameters t that best approximates the data in the least-squares sense, which means I want the t that minimizes
S(t) = \sum_{k=1}^{N_s} \sum_{i=1}^{N} \sum_{j=1}^{N} (Z_{k,ij} - Z(x_k; t)_{ij})^2
This is in general a non-linear fit of a matrix function. I'm only finding examples in which one has to fit scalar functions, which are not immediately generalizable to a matrix function (nor a vector function). I tried using the scipy.optimize.leastsq function and the packages symfit and lmfit, but I still don't manage to find a solution. I'm ending up writing my own code... any help is appreciated!
You can do curve-fitting with multi-dimensional data. As far as I am aware, none of the low-level algorithms explicitly support multidimensional data, but they do minimize a one-dimensional array in the least-squares sense. And the fitting methods do not really care about the "independent variable(s)" x except in that they help you calculate the array to be minimized - perhaps to calculate a model function to match to y data.
That is to say: if you can write a function that takes the parameter values and calculates the matrix to be minimized, just flatten that 2-d (or n-d) array to one dimension. The fit will not mind.

Simulink 3D lookup table

I have a system of three nonlinear equations with eight unknowns. I'm currently setting each equation equal to a desired value and then using MATLAB's fsolve (a numerical solver) to find a solution. Instead of running fsolve in real time, I'd like to pre-compute solutions for a specific set of values to which I set the equations equal.
In pursuit of that goal, I've run the solver over a set of values and created a 3-D matrix (N x N x N), which I've attempted to load into eight Simulink 3-D lookup tables (Direct Lookup Table (n-D) blocks) so I can fetch each of the eight solved unknowns. It's my understanding that the inputs to this block should work the same way I would reference an element in my 3-D array, table(x,y,z), but I'm constantly getting Simulink table-input out-of-range errors. I've confirmed the inputs are within the table size, so I'm not sure what's wrong.
This isn't the most elegant implementation, so I'm open to better solutions. Ideally, I'd like to have a Simulink lookup that takes three inputs and returns a vector of the eight solved unknowns, or even better, can do some type of linear interpolation between the three lookup values to return an approximate solution.
Thanks!
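One way to get the interpolated lookup described above is to do the interpolation in a MATLAB Function block (or plain MATLAB) with interpn. This is only a sketch; the grid vectors g1, g2, g3, the N x N x N x 8 solution array sol, and the query point q1, q2, q3 are assumptions about how the pre-computed fsolve results are stored:
% Trilinear interpolation into pre-computed solutions (names are assumptions).
u = zeros(8,1);
for k = 1:8
    u(k) = interpn(g1, g2, g3, sol(:,:,:,k), q1, q2, q3, 'linear');
end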

Summing a series in matlab

I'm trying to write a generic function for finding the cosine of a value inputted into the function. The formula for cosine that I'm using is:
cos(x) = sum((-1)^n * x^(2n) / (2n)!, n = 1..infinity)
I've looked at the MATLAB documentation, and this page implies that the "sum" function should be able to do it, so I tried to test it by entering:
sum(x^n, n=1..3)
but it just gives me "Error: The expression to the left of the equals sign is not a valid target for an assignment".
Is summing an infinite series something that MATLAB is able to do by default, or do I have to simulate it using a function and loops?
Well, if you want to approximate it to a finite number of terms, you can do it in MATLAB without toolboxes or loops:
sumCos = @(x, n)(sum(((-1).^(0:n)).*(x.^(2*(0:n)))./(factorial(2*(0:n)))));
and then use it like this
sumCos(pi, 30)
The first parameter is the angle, the second is the number of terms you want to take the series to (i.e. it affects the precision). This is a numerical solution, which I think is really what you're after.
By the way, I took the liberty of correcting your initial sum; surely n must start from 0 if you are trying to approximate cos.
If you want to understand my formula (which surely you do), then you need to read up on some essential MATLAB basics, namely the colon operator and the concept of using . to perform element-wise operations.
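A tiny illustration of those two building blocks (just a sketch to paste into the command window):
n = 0:3          % colon operator: builds the row vector [0 1 2 3]
x = 2;
(-1).^n          % element-wise power: alternating signs [1 -1 1 -1]
x.^(2*n)         % element-wise power over the vector: [1 4 16 64]
factorial(2*n)   % factorial applied element-wise: [1 2 24 720]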
In MATLAB itself, no, you cannot solve an infinite sum. You would have to estimate it as you suggested. The page you were looking at is part of the Symbolic Math toolbox, which is an add-on to MATLAB. In particular, you were looking at MuPAD, which is rather similar to Mathematica. It is a symbolic math workspace, whereas MATLAB is more of a numeric math workspace. If you own the Symbolic Math toolbox, you can either use MuPAD as you tried to above, or you can use the symsum function from within MATLAB itself to carry out sums of series.
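For completeness, a sketch of the symbolic route (assuming the Symbolic Math Toolbox is installed):
syms n x
symsum((-1)^n * x^(2*n) / factorial(2*n), n, 0, Inf)   % should evaluate to cos(x)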

Minimizing error of a formula in MATLAB (Least squares?)

I'm not too familiar with MATLAB or computational mathematics, so I was wondering how I might solve an equation involving a sum of squares, where each term involves two vectors, one known and one unknown. This formula is supposed to represent the error, and I need to minimize the error. I think I'm supposed to use least squares, but I don't know much about it, and I'm wondering what function is best for doing that and what arguments would represent my equation. My teacher also mentioned something about taking derivatives, and he formed a matrix using derivatives, which confused me even more. Am I required to take derivatives?
The problem that you must be trying to solve is
min u'u = min \sum_i u_i^2, with u = y - X*beta, where u is the error, y is the vector of dependent variables you are trying to explain, X is the matrix of independent variables, and beta is the vector you want to estimate.
Since \sum_i u_i^2 is differentiable (and convex), you can find its minimum by calculating its derivative and setting it equal to zero.
If you do that, you find that beta = inv(X'*X)*X'*y. This may be calculated using the MATLAB function regress (http://www.mathworks.com/help/stats/regress.html) or by writing this formula in MATLAB. However, you should be careful about how you evaluate the inverse of (X'*X); see Most efficient matrix inversion in MATLAB.
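In MATLAB this boils down to a couple of lines (a sketch; y is the vector of observations and X the design matrix):
beta = X \ y;               % backslash solves the least-squares problem via QR; numerically preferred
beta_ne = (X'*X) \ (X'*y);  % normal equations: same formula as inv(X'*X)*X'*y without forming the inverse
% with the Statistics Toolbox: beta = regress(y, X)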