I have a transfer function matrix of size 3-by-7:
G = [G11,G12,G13,G14,G15,G16,G17;
G21,G22,G23,G24,G25,G26,G27;
G31,G32,G33,G34,G35,G36,G37]
Is it possible to get A = G*(G^(-1))^T symbolically in MATLAB, where G^(-1) = inv(G) and (G^(-1))^T is the transpose of inv(G)?
Yes, it's possible, but it may take a long time, and your computer may also run out of memory. MATLAB's symbolic operations are not very fast, but here is the approach. First define the elements of your matrix as symbolic variables: syms G11 defines G11 as symbolic. Then define your G matrix the same way, and from there you can compute the A matrix.
I should also mention that since your matrix is 3-by-7, it has no ordinary inverse; inv only applies to square matrices, but you can use the pseudoinverse instead. And if you want to do heavy symbolic computation, Maple and Mathematica are much better suited; MATLAB's strength is numerical computation.
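A minimal sketch (assuming your Symbolic Math Toolbox version provides pinv for symbolic matrices):
syms G11 G12 G13 G14 G15 G16 G17
syms G21 G22 G23 G24 G25 G26 G27
syms G31 G32 G33 G34 G35 G36 G37

G = [G11,G12,G13,G14,G15,G16,G17;
     G21,G22,G23,G24,G25,G26,G27;
     G31,G32,G33,G34,G35,G36,G37];

Gp = pinv(G);    % symbolic Moore-Penrose pseudoinverse; can be extremely slow
A  = G * Gp.';   % G times the transpose of the pseudoinverse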
I have the following matrix
R=(A-C)*inv(A+B-C-C')*(A-C');
where A and B are n-by-n matrices. I want to find the n-by-n matrix C that minimizes the determinant of R, i.e.:
C = arg min (det(R));
Is there any function in MATLAB that can handle this problem?
It seems like you are trying to find the minimum of an unconstrained multivariable function. This can probably be achieved with fminunc:
fun = @(x)x(1)*exp(-(x(1)^2 + x(2)^2)) + (x(1)^2 + x(2)^2)/20;
x0 = [1,2];
[x,fval] = fminunc(fun,x0)
Note that there are no examples in the documentation where a matrix is used; this is probably because horrendous performance can be expected when trying to solve this problem for a matrix of any non-tiny size. (This is not because of MATLAB, but because of the nature of the problem.)
It is also good to realize that this method does not (and cannot) guarantee a global optimum, only a local optimum.
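For the matrix problem in this question, a hedged sketch (assuming A and B already exist in the workspace) is to flatten C into a vector for fminunc and reshape it inside the objective:
n    = size(A,1);
Rfun = @(C) (A - C) / (A + B - C - C') * (A - C');  % same R as above, via mrdivide
obj  = @(c) det(Rfun(reshape(c, n, n)));            % objective over the flattened C
[cOpt, fval] = fminunc(obj, A(:));                  % A(:) is just an arbitrary start
C = reshape(cOpt, n, n);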
I need to pre-compute the histogram intersection kernel matrices for using LIBSVM in MATLAB.
Assume x, y are two vectors. The kernel function is K(x, y) = sum(min(x, y)). In order to be efficient, the best practice in most cases is to vectorize the operations.
What I want to do is compute the kernel matrix the same way one computes the Euclidean distance between two matrices, e.g. pdist2(A, B, 'euclidean'). After defining a function intKernel, I could calculate the intersection kernel by calling pdist2(A, B, intKernel).
I know the function pdist2 may be an option, but I have no idea how to write the self-defined distance function; specifically, I do not know how to code the intersection kernel between a vector (1-by-M) and a matrix (N-by-M) in one condensed expression.
repmat may not be feasible, because the matrix is really large, say 20000-by-360000.
Any help would be appreciated.
Regards,
Peiyun
I think pdist2 is a good option, so let me help you define your distance function.
According to the doc, the self-defined distance function must have 2 inputs: the first is a 1-by-N vector; the second is an M-by-N matrix (be careful of the order!).
To avoid repmat, which is indeed memory-consuming, you can use bsxfun to apply basic operations to data with expansion over singleton dimensions. In your case, you can do the following:
distance_kernel = @(x,Y) sum(bsxfun(@min,x,Y),2);
Summation is done over the columns to get a column vector as output.
Then just call pdist2 and you are done.
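Putting it together, a small usage sketch (assuming the rows of A and B are the observations, i.e. A is P-by-M and B is Q-by-M):
intKernel = @(x,Y) sum(bsxfun(@min, x, Y), 2);  % x: 1-by-M row, Y: Q-by-M
K = pdist2(A, B, intKernel);                    % K(i,j) = sum(min(A(i,:), B(j,:)))
On R2016b or later, implicit expansion lets you write sum(min(x, Y), 2) without bsxfun.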
I need to solve the following SOCP in Matlab:
argmin_x ||R*x||_2 s.t. s^H * x = 1 and ||x||_2 < d,
where x is an Nx1 vector and R is an MxN matrix.
CVX can solve this type of problem. However, CVX requires me to give R and does not allow me to instead give a function handle that will return R*x. This is a problem for me since once R becomes large, computing R*x directly takes too long. There exists an efficient algorithm for computing R*x that I would like to take advantage of, so I am hoping that there is another SOCP solver that I could use.
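For concreteness, the explicit-R CVX formulation I have in mind looks roughly like this (a sketch; note that CVX only accepts the non-strict bound ||x||_2 <= d):
cvx_begin
    variable x(N) complex
    minimize( norm(R * x) )
    subject to
        s' * x == 1;    % s' is the conjugate transpose in MATLAB
        norm(x) <= d;
cvx_end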
I'd like to find the principal components of a data matrix X in Matlab by solving the optimization problem min||X-XBB'||, where the norm is the Frobenius norm, and B is an orthonormal matrix. I'm wondering if anyone could tell me how to do that. Ideally, I'd like to be able to do this using the optimization toolbox. I know how to find the principal components using other methods. My goal is to understand how to set up and solve an optimization problem which has a matrix as the answer. I'd very much appreciate any suggestions or comments.
Thanks!
MJ
The thing about optimization is that there are different methods to solve a problem, some of which can require extensive computation.
Given the constraint that B be orthonormal, your solution is to use fmincon. Start by creating a file for the nonlinear constraint:
function [c,ceq] = nonLinCon(x)
c = [];                                   % no inequality constraints
ceq = norm(x'*x - eye(size(x)), 'fro');   % zero exactly when B is orthonormal
Then call the routine:
B = fmincon(@(B) norm(X - X*B*B','fro'),B0,[],[],[],[],[],[],@nonLinCon)
with B0 being a good guess at what the answer will be.
Also, you need to understand that this algorithm tries to find a local minimum, which may not be the solution you ultimately want. For instance:
X = randn(1,2)
fmincon(@(B) norm(X - X*B*B','fro'),rand(2),[],[],[],[],[],[],@nonLinCon)
ans =
0.4904 0.8719
0.8708 -0.4909
fmincon(@(B) norm(X - X*B*B','fro'),rand(2),[],[],[],[],[],[],@nonLinCon)
ans =
0.9864 -0.1646
0.1646 0.9864
So be careful when using these methods, and try to select a good starting point.
The Statistics Toolbox has a built-in function, princomp, that does PCA. If you want to learn (in general, without the Optimization Toolbox) how to write your own PCA code, there are good tutorials available online.
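For example (princomp was the function's name at the time; newer releases call it pca):
[coeff, score] = princomp(X);   % columns of coeff are the principal component directions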
Since you've specifically mentioned wanting to use the Optimization Toolbox and to set this up as an optimization problem: there is also a very well-trusted third-party package, CVX from Stanford University, that can solve the optimization problem you are referring to.
Do you have the Optimization Toolbox? The documentation is really good; just try one of their examples: http://www.mathworks.com/help/toolbox/optim/ug/brg0p3g-1.html.
But in general, a call to an optimization function looks like this:
[OptimizedMatrix, OptimizedObjectiveFunction] = optimize(@(MatrixToOptimize) MyObjectiveFunction(MatrixToOptimize), InitialConditionsMatrix, ...optional constraints and options...);
You must create MyObjectiveFunction() yourself; it must take the matrix you want to optimize as input and output a scalar value indicating the cost of the current input matrix. Most of the optimizers will try to minimise this cost.
fmincon() is a good place to start; once you are used to the toolbox, and if you can, you should choose a more specific optimization algorithm for your problem.
To optimize a matrix rather than a vector, reshape the matrix to a vector, pass this vector to your objective function, and then reshape it back to the matrix within your objective function.
For example say you are trying to optimize the 3 x 3 matrix M. You have defined objective function MyObjectiveFunction(InputVector). Pass M as a vector:
MyObjectiveFunction(M(:));
And within MyObjectiveFunction you must reshape the input back into a matrix (if necessary):
function cost = MyObjectiveFunction(InputVector)
    InputMatrix = reshape(InputVector, [3 3]);  % recover the matrix form
    % ... matrix operations on InputMatrix that produce a scalar cost ...
    cost = norm(InputMatrix, 'fro');            % placeholder: replace with your actual cost
end
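A short usage sketch (M0 is just a hypothetical starting guess):
M0 = eye(3);                                          % hypothetical initial guess
[mOpt, cost] = fminunc(@MyObjectiveFunction, M0(:));  % optimize over the flattened matrix
MOpt = reshape(mOpt, [3 3]);                          % reshape the optimum back into a matrix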
I am working on a function that will generate polynomial interpolants for a given set of ordered pairs. You currently input the indexes of the node points in one vector, and the values of the function to be interpolated in a second vector. I then generate a symbolic expression for the Lagrange polynomial that interpolates that set of points. I would like to be able to go from this symbolic form to a vector form for comparison with test functions and such. That is, I have something that generates some polynomial P(x) in terms of some symbolic variable x. I would like to then sample this polynomial to a vector, and get values for the polynomial over (for example) linspace(-1,1,1000). If this is possible, how do I do it?
I guess I'll include the code that I have so far:
function l_poly = lpoly(x,f)
% Returns the polynomial interpolant as computed by Lagrange's formula
syms a
n = size(x,2);
l_poly_vec = 1;
l_poly = 0;
for k = 1:n
    for l = 1:n
        if k ~= l
            % build up the k-th Lagrange basis polynomial in a
            l_poly_vec = l_poly_vec*(a - x(l))/(x(k) - x(l));
        end
    end
    l_poly = l_poly + f(k)*l_poly_vec;  % weight the basis polynomial by f(k)
    l_poly_vec = 1;                     % reset for the next basis polynomial
end
I plan on adding a third (or possibly fourth) input depending on how I can solve this issue. I'm guessing I would just need the length of the vector I want to sample to and the endpoints.
If I understand you correctly, you've constructed a Lagrange interpolating polynomial using the Symbolic Math Toolbox and now wish to evaluate it over a vector of values. One way to do this is to use the function sym2poly to extract the coefficients of the symbolic polynomial and then use polyval to evaluate it. Alternatively, you could use matlabFunction to convert your symbolic expression into a regular MATLAB function, or use subs to substitute numeric values for the symbolic variable a.
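For example, a sketch of all three routes (assuming P = lpoly(x,f) is the symbolic interpolant in the variable a):
syms a
P  = lpoly(x, f);               % symbolic interpolant from the code above
xs = linspace(-1, 1, 1000);

p  = sym2poly(P);               % route 1: extract numeric coefficients...
y1 = polyval(p, xs);            % ...and evaluate with polyval

fh = matlabFunction(P);         % route 2: convert to an anonymous function
y2 = fh(xs);

y3 = double(subs(P, a, xs));    % route 3: direct substitution (slowest)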
However, you would probably be better off avoiding the symbolic toolbox altogether and directly constructing the coefficients of the Lagrange interpolating polynomial, or, better yet, use a different interpolation scheme altogether. The function interp1 might be a good place to start.