Which of the following is the most precise classification of a problem X? - classification

Which of the following is the most precise classification of a problem X?
X is in NP
X is in P
X is in O(n²)
X is in Θ(n²)
I would greatly appreciate it if anyone could explain the answer to me.
I believe it's either NP or P, but I'm really not sure.

P and NP say whether a problem can be solved in polynomial time on a deterministic machine (P) or a non-deterministic machine (NP). This reflects the complexity of the problem itself, not the complexity of a particular algorithm that solves it.
O(n^2), on the other hand, means that the algorithm being analyzed to solve the problem has an upper bound of n^2 on its running time, where n is the input size.
Θ(n^2) is also a way of expressing the complexity of an algorithm used to solve a problem, but in contrast to O(n^2) it means that n^2 is both a lower and an upper bound on the running time.
(There is also Ω, which gives only a lower bound, and o (little-oh), which gives a strict, non-tight upper bound.)
Θ(n^2) is more precise because O(n^2) is just an upper bound: an algorithm that is O(n^2) is also O(n^3) and O(n!), which says much less.
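For reference, the standard definitions behind this (not part of the original answer, just spelling out what "upper bound" and "tight bound" mean):
f(n) is in O(g(n)) iff there are constants c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0 (an upper bound only).
f(n) is in Θ(g(n)) iff there are constants c1, c2 > 0 and n0 such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 (an upper and a lower bound).
So saying the running time is Θ(n^2) pins down the growth rate exactly, which is why it is the most precise of the four options.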

Θ(n²) ⊂ O(n²) ⊂ P ⊆ NP

Related

eigenvalue optimization over uncertain matrix

I have a really tricky optimization problem to run in Matlab, and I hope somebody can help.
I have an NxN matrix M (which is symmetric and dense; it is a correlation matrix, if this helps) which is uncertain but bounded (Mmin < M < Mmax), and I know the bounds.
I have to consider the last n eigenvalues (so, having all N eigenvalues of the matrix, I discard the (N-n) largest ones).
What I need is to run two constrained optimization problems to find, over all possible matrices M with Mmin < M < Mmax, the two sets of these n eigenvalues which respectively maximize and minimize them (i.e. the set of n eigenvalues, after discarding the N-n largest, in which each eigenvalue is as large/small as possible). I know that this can be done with some special kind of genetic algorithm, given that the eigenvalues are real.
The problem is that my matrix is something like 2000x2000 and I discard the 4 largest eigenvalues, so I would like to build some kind of computationally efficient code.
Any suggestions on how to do this would be really appreciated.
Thanks in advance
J.
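As a very rough starting point (my own illustration, not a full answer; the matrix below is a random stand-in for M, and the outer search over Mmin < M < Mmax, e.g. with ga from the Global Optimization Toolbox, is not shown), the expensive inner step of evaluating the n smallest eigenvalues of one 2000x2000 symmetric candidate is cheap enough with a plain eig call:
N = 2000;                        % size from the question
k = 4;                           % number of largest eigenvalues to discard
R = randn(4*N, N);
M = corrcoef(R);                 % stand-in symmetric correlation matrix
lam = sort(eig(M), 'ascend');    % dense symmetric eig is fast at this size
lamKept = lam(1:N-k);            % the n = N-k smallest eigenvalues, to be fed to the objective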

Least-squares minimization within threshold in MATLAB

The cvx suite for MATLAB can solve the (seemingly innocent) optimization problem below, but it is rather slow for the large, full matrices I'm working with. I'm hoping this is because using cvx is overkill, and that the problem actually has an analytic solution, or that a clever use of some built-in MATLAB functions can more quickly do the job.
Background: It is well-known that both x1=A\b and x2=pinv(A)*b solve the least-squares problem:
minimize norm(A*x-b)
with the distinction that norm(x2)<=norm(x1). In fact, x2 is the minimum-norm solution to the problem, so norm(x2)<=norm(x) for all possible solutions x.
Defining D=norm(A*x2-b), (equivalently D=norm(A*x1-b)), then x2 solves the problem
minimize norm(x)
subject to
norm(A*x-b) == D
Problem: I'd like to find the solution to:
minimize norm(x)
subject to
norm(A*x-b) <= D+threshold
In words, I don't need norm(A*x-b) to be as small as possible, just within a certain tolerance. I want the minimum-norm solution x that gets A*x within D+threshold of b.
I haven't been able to find an analytic solution to the problem (like using the pseudoinverse in the classic least-squares problem) on the web or by hand. I've been searching things like "least squares with nonlinear constraint" and "least squares with threshold".
Any insights would be greatly appreciated, but I suppose my real question is:
What is the fastest way to solve this "thresholded" least-squares problem in MATLAB?
Interesting question. I do not know the answer to your exact question, but a working solution is presented below.
Recap
Define res(x) := norm(Ax - b).
As you state, x2 minimizes res(x). In the overdetermined case (typically, A having more rows than columns), x2 is the unique minimizer. In the underdetermined case, it is joined by infinitely many others*. However, among all of these, x2 is the unique one that minimizes norm(x).
To summarize, x2 minimizes (1) res(x) and (2) norm(x), and it does so in that order of priority. In fact, this characterizes (fully determines) x2.
The limit characterization
But, another characterization of x2 is
x2 := limit_{e-->0} x_e
where
x_e := argmin_{x} J(x;e)
where
J(x;e) := res(x)^2 + e * norm(x)^2
It can be shown that
x_e = (A'A + e I)^{-1} A' b (eqn a)
It should be appreciated that this characterization of x2 is quite magical. The limit exists even if (A'A)^{-1} does not. And the limit somehow preserves priority (2) from above.
Using e>0
Of course, for finite (but small) e, x_e will not minimize res(x) (instead it minimizes J(x;e)). In your terminology, the difference is the threshold. I will rename it to
gap := res(x_e) - min_{x} res(x).
Decreasing the value of e is guaranteed to decrease the value of the gap. Reaching a specific gap value (i.e. the threshold) is therefore easy to achieve by tuning e.**
This type of modification (adding a norm(x) penalty to the res(x) minimization problem) is known as regularization in the statistics literature, and is generally considered a good idea for stability (both numerically and with respect to parameter values).
*: Note that x1 and x2 only differ in the underdetermined case
**: It does not even require any heavy computations, because the inverse in (eqn a) is easily computed for any (positive) value of e if the SVD of A has already been computed.
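A rough MATLAB sketch of this recipe (my own illustration, not code from the question or answer; only A, b and threshold are names taken from the question): compute x_e from a single SVD of A and bisect on e until res(x_e) reaches D + threshold.
m = 60; n = 100;                                   % a small underdetermined toy problem
A = randn(m, n); b = randn(m, 1);
threshold = 1e-2;

[U, S, V] = svd(A, 'econ');
s = diag(S);
xFun = @(e) V * ((s ./ (s.^2 + e)) .* (U' * b));   % x_e = (A'A + e*I)^{-1} A' b via the SVD
x2 = pinv(A) * b;                                  % minimum-norm least-squares solution
D  = norm(A * x2 - b);

eLo = 0; eHi = 1;
while norm(A * xFun(eHi) - b) <= D + threshold && eHi < 1e12
    eHi = 2 * eHi;                                 % grow eHi until the threshold is violated
end
for it = 1:60                                      % bisection: res(x_e) grows and norm(x_e) shrinks as e grows
    eMid = (eLo + eHi) / 2;
    if norm(A * xFun(eMid) - b) <= D + threshold
        eLo = eMid;                                % still feasible: a larger e gives a smaller norm(x)
    else
        eHi = eMid;
    end
end
x = xFun(eLo);                                     % approximate minimum-norm x with norm(A*x-b) <= D + threshold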

What is the benefit of using SVD to solve Ax=b

I have a linear equation such as
Ax=b
where A is a full-rank matrix of size 512x512, b is a 512x1 vector, and x is the unknown vector. I want to find x, so I have some options for doing that:
1. Using the normal way:
inv(A)*b
2. Using SVD (singular value decomposition):
[U S V]=svd(A);
x = V*(diag(diag(S).^-1)*(U.'*b))
Both methods give the same result. So what is the benefit of using SVD to solve Ax=b, especially in the case where A is a 2D matrix?
Welcome to the world of numerical methods; let me be your guide.
You, as a newcomer to this world, wonder: "Why would I do something this complicated with this SVD stuff instead of the commonly known inverse?! I'm going to try it in Matlab!"
And no answer was found. That is because you are not looking at the problem itself! The problems arise when you have an ill-conditioned matrix. Then computing the inverse is not numerically possible.
example:
A=[1 1 -1;
1 -2 3;
2 -1 2];
Try to invert this matrix using inv(A). You'll get Inf (and a warning that the matrix is singular).
That is because the condition number of the matrix is very high (check cond(A)).
However, if you solve it with the SVD approach (with b=[1;-2;3], truncating the zero singular values the way pinv does), you will get a result. Solving Ax=b systems with bad condition numbers is still a hot research topic.
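A quick way to see this in MATLAB (my illustration; pinv does the truncated-SVD solve mentioned above):
A = [1 1 -1; 1 -2 3; 2 -1 2];
b = [1; -2; 3];
x_inv  = inv(A) * b      % Inf/NaN entries plus a singular-matrix warning
x_pinv = pinv(A) * b     % a finite minimum-norm least-squares solution via the SVD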
As @Stewie Griffin suggested, the best way to go is mldivide, as it does a couple of things behind the scenes.
(Yeah, my example is not very good because inverting the matrix just gives Inf, but there is a way better example in this youtube video.)
inv(A)*b has several downsides. The main one is that it explicitly calculates the inverse of A, which is both time-consuming and may result in inaccuracies if the values vary by many orders of magnitude.
Although it might be better than inv(A)*b, using svd is not the "correct" approach here. The MATLAB way to do this is to use mldivide, \. With it, MATLAB chooses the best algorithm to solve the linear system based on its properties (Hermitian, upper Hessenberg, real and positive diagonal, symmetric, diagonal, sparse, etc.). Often the solution will be an LU factorization with partial pivoting, but it varies. You'll have a hard time beating MATLAB's implementation of mldivide, but using svd might give you some more insight into the properties of the system if you actually investigate U, S, V. If you don't want to do that, go with mldivide.
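To make the comparison concrete, here is a small sketch (my own, not from the answers) on a synthetic ill-conditioned 512x512 system; it is only meant to illustrate the points above, and the exact numbers will vary:
n = 512;
[Q1, ~] = qr(randn(n)); [Q2, ~] = qr(randn(n));
s = logspace(0, -12, n).';                 % singular values spanning 12 decades
A = Q1 * diag(s) * Q2';                    % cond(A) is about 1e12
xTrue = randn(n, 1);
b = A * xTrue;

x_inv = inv(A) * b;                        % explicit inverse: slower and usually less accurate
x_bs  = A \ b;                             % mldivide: LU with partial pivoting for a dense square A

[~, S, ~] = svd(A);                        % the SVD exposes the conditioning directly
fprintf('cond(A) = %.2e\n', S(1,1) / S(end,end));
fprintf('relative error  inv: %.2e   mldivide: %.2e\n', ...
        norm(x_inv - xTrue)/norm(xTrue), norm(x_bs - xTrue)/norm(xTrue));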

how to convert a matrix to a diagonally dominant matrix using pivoting in Matlab

Hi, I am trying to solve a linear system of the following type:
A*x=b,
where A is the coefficient matrix,
x is the vector of unknowns and
b is the right-hand-side vector.
The coefficient matrix (A) is an n-by-n sparse matrix, with some zeros even on the diagonal. In order to solve this system accurately I am using an iterative method in Matlab called bicgstab (biconjugate gradients stabilized method).
This coefficient matrix (A) has a
det(A)=-4.1548e-05 and a rcond(A)= 1.1331e-04.
Therefore the matrix is ill-conditioned. I first tried to perform a scaling, and the results were:
det(A) = -1.2612e+135 but rcond(A) = 5.0808e-07...
Therefore the matrix is still ill-conditioned... I checked, and the sum of the absolute values of the off-diagonal elements was 163.60, while the sum of the absolute values of the diagonal elements was 32.49... Therefore the coefficient matrix is not diagonally dominant and will not converge using my function bicgstab...
I am looking for someone who can help me perform a pivoting of the coefficient matrix (A) so that it becomes diagonally dominant, or any other advice for solving this problem...
Thanks for the help.
First, there are quite a few things to note here:
Don't use the determinant to estimate the "amount of singularity" of your matrix. The determinant is the product of all the eigenvalues of your matrix, and therefore its scaling can be wildly misleading compared to a much better measure like the condition number, which leads to the next point:
Your conditioning (according to rcond) isn't that bad; are you working with single or double precision? Large problems routinely get condition numbers in this range and are still quite solvable, but of course this depends on a very complicated interaction of many factors, of which the condition number plays only a small part. This leads to another complicated point:
Diagonal dominance may not help you at all here. BiCGStab, as far as I know, does not require diagonal dominance for its convergence, nor am I aware that diagonal dominance is even known to help it. Diagonal dominance is usually an assumption made by other iterative methods such as the Jacobi method or Gauss-Seidel. Actually, the convergence behavior of BiCGStab is not very well understood at all, and it is usually only used when memory is a very severe problem but conjugate gradients is not applicable.
If you are really interested in using a Krylov method (such as BiCGStab) to solve your problem, then you generally need to have more understanding of where your matrix is coming from so that you can choose a sensible preconditioner.
So this calls for a bit more information. Do you know more about this matrix? Is it arising from some kind of physical problem? Do you know for example if it is symmetric or positive definite (I will assume not both because you are not using CG).
Let me end with some actionable advice which is very generic, and so not necessarily optimal:
If memory is not an issue, consider using restarted GMRES instead of BiCGStab. My experience is that GMRES has much more robust convergence.
Try an approximate factorization preconditioner such as ILU. MATLAB has a function for this built in. (A generic sketch of both suggestions follows below.)
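A generic sketch of both suggestions (restarted GMRES plus an ILU preconditioner); the matrix below is just a standard sparse test problem standing in for the asker's system, and the ilu options normally need tuning:
A = delsq(numgrid('S', 60));              % sparse test matrix (2D Laplacian) as a placeholder
b = ones(size(A,1), 1);

setup.type    = 'crout';                  % for a matrix with zeros on the diagonal, 'ilutp' (with pivoting) may be needed
setup.droptol = 1e-4;
[L, U] = ilu(A, setup);

restart = 30; tol = 1e-8; maxit = 200;
[x, flag, relres, iter] = gmres(A, b, restart, tol, maxit, L, U);
fprintf('flag = %d, relres = %.2e, outer/inner iterations = %d/%d\n', ...
        flag, relres, iter(1), iter(2));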

Numerical Instability Kalman Filter in MatLab

I am trying to run a standard Kalman filter algorithm to calculate likelihoods, but I keep getting a problem of a non-positive-definite variance matrix when calculating the normal densities.
I've researched a little and seen that there may in fact be some numerical instability; I tried some numerical ways to avoid a non-positive-definite matrix, using both the Cholesky decomposition and its variant, the LDL' decomposition.
I am using MatLab.
Does anyone suggest anything?
Thanks.
I have encountered what might be the same problem before, when I needed to run a Kalman filter for long periods and over time my covariance matrix would degenerate. It might just be a problem of losing symmetry due to numerical error. One simple way to force your covariance matrix (let's call it P) to remain symmetric is to do:
P = (P + P')/2   % where P' is transpose(P)
right after estimating P.
Post your code.
As a rule of thumb, if the model is not accurate and the regularization (i.e. the model noise matrix Q) is not sufficiently "large", underfitting will occur and the covariance matrix of the estimator will be ill-conditioned. Try fine-tuning your Q matrix.
The Kalman filter implemented using the Joseph form is known to be numerically unstable, as any old timer who once worked with a single-precision implementation of the filter can tell you. This problem was discovered zillions of years ago and prompted a lot of research into implementing the filter in a stable manner. Probably the best-known implementation is the UD one, where the covariance matrix is factorized as UDU' and the two factors are updated and propagated using special formulas (see Thornton and Bierman). U is an upper triangular matrix with "1" on its diagonal, and D is a diagonal matrix.
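For completeness, here is a minimal generic safeguard in MATLAB (this is not the UD filter described above, just the cheap symmetry/positive-definiteness patch-up many implementations use; the 1e-12 eigenvalue floor is an arbitrary value to tune):
P = randn(6); P = P * P';                 % stand-in covariance matrix
P = P - 1.0001 * min(eig(P)) * eye(6);    % perturb it so it is slightly indefinite
P = (P + P') / 2;                         % remove asymmetry caused by rounding
[~, notPD] = chol(P);                     % nonzero second output means P is not positive definite
if notPD
    [V, D] = eig(P);
    P = V * diag(max(diag(D), 1e-12)) * V';   % floor the eigenvalues
    P = (P + P') / 2;
end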