Boolean least squares - MATLAB

For a spectrum estimation algorithm I need to find the best fitting linear combination of vectors to fit a target spectral distribution. So far, this works relatively well using the lsqlin optimizer in MATLAB.
However, for the final application I would like to approximate/solve this problem using exclusively zeros and ones, i.e. solve Ax = b for Boolean x.
Is there any way to parametrize lsqlin or another optimizer function for this purpose?

If the problem is just:
Solve Ax=b for x in {0,1}
then you can use a MIP solver (e.g. MATLAB's intlinprog). If the system is over-determined and you want a least squares solution:
Min w'w
S.t. Ax - b = w
x in {0,1} (binary variable)
w free variable
then you have an MIQP (Mixed-Integer Quadratic Programming) problem. There are good solvers for this, such as CPLEX and Gurobi (both callable from MATLAB). There is also a MATLAB discussion of an approximation scheme using intlinprog. Another idea is to replace the quadratic objective with a sum of absolute values; this can be formulated as a linear MIP model.
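To make that last idea concrete, here is a minimal sketch of the sum-of-absolute-values reformulation solved with intlinprog (A and b are assumed given; the auxiliary vector t bounds the residuals elementwise):

[m, n] = size(A);
f      = [zeros(n,1); ones(m,1)];  % decision vector z = [x; t]; minimize sum(t)
intcon = 1:n;                      % the x entries are integer (0/1 via the bounds)
% |A*x - b| <= t written as two sets of linear inequalities:
Aineq  = [ A, -eye(m);             %  A*x - t <=  b
          -A, -eye(m)];            % -A*x - t <= -b
bineq  = [ b; -b];
lb = [zeros(n,1); zeros(m,1)];
ub = [ones(n,1);  inf(m,1)];
z = intlinprog(f, intcon, Aineq, bineq, [], [], lb, ub);
x = round(z(1:n));                 % recover the Boolean solution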

Related

Numerical Instability of the Kalman Filter in MATLAB

I am trying to run a standard Kalman filter algorithm to calculate likelihoods, but I keep getting a problem with a non-positive-definite variance matrix when calculating the normal densities.
I've researched a little and seen that there may in fact be some numerical instability; I tried some numerical ways to avoid a non-positive-definite matrix, using both the Cholesky decomposition and its variant, the LDL' decomposition.
I am using MATLAB.
Does anyone have any suggestions?
Thanks.
I have encountered what might be the same problem before, when I needed to run a Kalman filter for long periods: over time my covariance matrix would degenerate. It might just be a problem of losing symmetry due to numerical error. One simple way to force your covariance matrix (let's call it P) to remain symmetric is to do:
P = (P + P')/2 % where P' is transpose(P)
right after estimating P.
Post your code.
As a rule of thumb, if the model is not accurate and the regularization (i.e. the model noise matrix Q) is not sufficiently "large", underfitting will occur and the covariance matrix of the estimator will be ill-conditioned. Try fine-tuning your Q matrix.
The Kalman filter implemented using the conventional covariance equations is known to be numerically unstable (the Joseph form was one of the early remedies), as any old-timer who once worked with a single-precision implementation of the filter can tell you. This problem was discovered long ago and prompted a lot of research into implementing the filter in a stable manner. Probably the best-known implementation is the UD factorization, where the covariance matrix is factorized as UDU' and the two factors are updated and propagated using special formulas (see Thornton and Bierman). U is an upper triangular matrix with 1s on its diagonal, and D is a diagonal matrix.
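For illustration, a minimal sketch contrasting the two covariance-update forms in a single filter step; here P, H, R, and K are assumed to be the prior covariance, measurement matrix, measurement noise covariance, and Kalman gain:

I = eye(size(P));
% Conventional update: cheap, but can lose symmetry and positive definiteness:
P_simple = (I - K*H) * P;
P_simple = (P_simple + P_simple')/2;          % the symmetry repair suggested above
% Joseph form: more arithmetic, but preserves symmetry and positive
% semidefiniteness even with a suboptimal gain K:
P_joseph = (I - K*H) * P * (I - K*H)' + K*R*K';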

Linear program with double constraints on the same variable

I have a linear program of the form min f'*x s.t. A1*x <= d1, A2*x <= d2. The form with one constraint is implemented in MATLAB's linprog command. What command can I use to solve a linear program with two sets of constraints?
I could of course create a block diagonal matrix and double the size of the variable x, but if there is a more efficient way I would like to use it, because the matrix is quite large.
Possibly I don't understand the question right, but can't you combine the matrices A1 and A2 as A = [A1; A2]?
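A minimal sketch of that stacking (assuming f, A1, d1, A2, d2 are already defined):

A = [A1; A2];          % stack both constraint blocks row-wise
d = [d1; d2];
x = linprog(f, A, d);  % solves min f'*x  s.t.  A*x <= d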
You may be interested in the Dantzig-Wolfe decomposition algorithm for solving linear programs; it takes advantage of this block-diagonal structure. However, I don't think there is an out-of-the-box implementation of it in commercial software.

Finding a MaxiMin Solution of a Function in MATLAB

I would like to find the maximin solution of a function f in MATLAB, i.e. the x achieving max_x min_y f(x, y).
x and y are both real vectors, and f is smooth but quite complex to calculate (it is formed from the output of a neural network).
I tried an alternating approach, holding x constant and minimising over y and then holding y constant and maximising over x, but this did not converge and instead oscillated.
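For reference, the alternating scheme looks roughly like this (a sketch only; f is a function handle, and x0, y0, maxIter are placeholder starting values and iteration budget):

x = x0;  y = y0;
opts = optimoptions('fminunc', 'Display', 'off');
for k = 1:maxIter                           % maxIter is a placeholder
    y = fminunc(@(yy)  f(x, yy), y, opts);  % hold x fixed, minimise over y
    x = fminunc(@(xx) -f(xx, y), x, opts);  % hold y fixed, maximise over x
end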
I believe genetic algorithms could be used to solve the problem, but firstly I could not see how to do that in MATLAB, and secondly it seems a waste of the fact that f is smooth.
I have both the Optimization and Global Optimization toolboxes. What is the best way to solve this problem in MATLAB?

MATLAB: Eigenvalue Analysis for a System of Homogeneous Second-Order Equations with Damping Terms

I have a system of dynamic equations that ultimately can be written in the well-known "spring-mass-damper" form:
[M]{q''}+[C]{q'}+[K]{q}={0}
[M], [C], [K]: n-by-n Coefficient Matrices
{q}: n-by-1 Vector of the Degrees of Freedom
(the ' mark represents a time derivative)
I want to find the eigenvalues and eigenvectors of this system. Obviously, due to the term [C]{q'}, the standard MATLAB function eig() will not be directly applicable.
Does anyone know of a simple MATLAB routine to determine the eigenvalues and eigenvectors of this system? The system is homogeneous, so an efficient eigenvalue analysis should be very feasible, but I'm struggling a bit.
Obviously I could use brute force and symbolic computing software to find the gigantic characteristic polynomial, but this seems inefficient to me, especially because I'm looping through this in other parts of the code to determine frequencies as a function of other varied parameters.
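For what it's worth, this is a quadratic eigenvalue problem, and it can be handled numerically without any symbolic work; a minimal sketch, assuming M, C, and K are n-by-n numeric matrices:

% Option 1: polyeig solves (K + lambda*C + lambda^2*M)*q = 0 directly:
[Q, lambda] = polyeig(K, C, M);  % 2n eigenvalues; eigenvectors in the columns of Q

% Option 2: first-order (state-space) linearization z = [q; q'], then eig():
n = size(M, 1);
Acomp = [zeros(n), eye(n);
         -M\K,     -M\C];
[V, D] = eig(Acomp);             % the same 2n eigenvalues appear on diag(D)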

Minimizing error of a formula in MATLAB (Least squares?)

I'm not too familiar with MATLAB or computational mathematics, so I was wondering how I might solve an equation involving a sum of squares, where each term involves two vectors, one known and one unknown. This formula is supposed to represent the error, and I need to minimize that error. I think I'm supposed to use least squares, but I don't know much about it, and I'm wondering what function is best for this and what arguments would represent my equation. My teacher also mentioned something about taking derivatives, and he formed a matrix using derivatives, which confused me even more. Am I required to take derivatives?
The problem that you must be trying to solve is
min u'u = min sum_i u_i^2, with u = y - X*beta, where u is the error, y is the vector of dependent variables you are trying to explain, X is a matrix of independent variables, and beta is the vector you want to estimate.
Since sum_i u_i^2 is differentiable (and convex), you can find the minimum by taking its derivative and setting it equal to zero.
If you do that, you find that beta = inv(X'*X)*X'*y. This may be calculated using the MATLAB function regress (http://www.mathworks.com/help/stats/regress.html) or by writing the formula directly in MATLAB. However, you should be careful about how you evaluate the inverse of (X'X); see Most efficient matrix inversion in MATLAB.
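A minimal sketch of the options, assuming y is m-by-1 and X is m-by-k:

beta = X \ y;                 % preferred: QR-based solve, no explicit inverse
% beta = regress(y, X);       % Statistics Toolbox equivalent
% beta = inv(X'*X) * (X'*y);  % the textbook formula; avoid in practice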