I am trying to diagonalize the Bogoliubov-de Gennes Hamiltonian. The problem is that the Hamiltonian contains a Laplacian. This can be handled by using a discretized Laplacian.
If I use a simple Hamiltonian (just the Laplacian) and call eig(l1dd(...,...)), with l1dd the discretized Laplacian (2 on the diagonal, -1 on the off-diagonals) as in https://people.sc.fsu.edu/~jburkardt/m_src/laplacian/laplacian.html, then I get:
I expect a linear or quadratic dispersion ($\omega \propto k$ or $\omega \propto k^2$), but I get neither. What could be the problem? It seems that eig does a good job, so probably l1dd is the problem?
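Since the arguments of l1dd are elided above, here is a minimal sketch (my own construction with assumed grid parameters, not the poster's actual call) of the tridiagonal 2/-1 Laplacian and a check of its eigenvalues against the expected dispersion:

n = 100;  h = 1/(n+1);                         % assumed number of interior points and spacing
e = ones(n,1);
L1 = spdiags([-e 2*e -e], -1:1, n, n) / h^2;   % discrete -d^2/dx^2, Dirichlet boundaries
lambda = sort(eig(full(L1)));                  % numerical eigenvalues, ascending
k = (1:n)' * pi;                               % wavenumbers of the Dirichlet modes on [0,1]
% The discrete operator's dispersion is (2 - 2*cos(k*h))/h^2, which only
% approaches the continuum k.^2 for small k*h.
plot(k, lambda, 'o', k, k.^2, '-', k, (2 - 2*cos(k*h))/h^2, '--');
xlabel('k'); ylabel('\omega(k)'); legend('eig', 'k^2', 'discrete dispersion');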
I want to obtain the natural frequencies of a simple mechanical system with mass matrix M and stiffness matrix K (provided as a .mat file):
M*x''(t) + K*x(t) = 0   (x = position).
Basically this means I have to solve det(K - w^2*M) = 0. But how can I solve this in Matlab (or, if necessary, reduce it to a standard eigenvalue problem and then solve that)? The matrices are definitely solvable with Abaqus (FEM software), but I have to solve the problem in Matlab.
I tried the following without success: det(K - w^2*M) = 0 => det(M^-1*K - w^2*I) = 0 (I := identity matrix).
But solving this eigenvalue problem with
sqrt(eigs(K*M^-1))
delivers wrong values and the warning:
"Matrix is singular to working precision.
In matlab.internal.math.mpower.viaMtimes (line 35)"
Other wrong values can be obtained via det(K-w^2*M)=0 => det(I/(w^2)-M*K^-1)=0:
1./sqrt(eigs(M*K^-1))
Any hint would help me. Thanks in advance.
As @Arpi mentioned, you actually want to solve the generalized eigenvalue problem:
K*x = w^2*M*x
Since your matrices K and M are apparently singular (or at least one of them is), you cannot use eigs here; you have to use eig:
V = eig(K,M);
w = sqrt(V);
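A short usage sketch (the file and variable names are illustrative, assuming K and M are stored in the provided .mat file) that turns the eigenvalues into sorted natural frequencies:

data = load('system.mat');      % hypothetical file name for the provided matrices
K = data.K;  M = data.M;
lambda = eig(K, M);             % generalized eigenvalues, lambda = w^2
lambda = sort(real(lambda));    % small imaginary parts can appear from round-off
w = sqrt(max(lambda, 0));       % rad/s; clip tiny negatives (e.g. rigid-body modes)
f = w / (2*pi);                 % natural frequencies in Hz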
For a spectrum estimation algorithm I need to find the best fitting linear combination of vectors to fit a target spectral distribution. So far, this works relatively well using the lsqlin optimizer in MATLAB.
However, for the final application I would like to approximate/solve this problem for exclusively zeros and ones, meaning Ax=b solved for Boolean x.
Is there any way to parametrize lsqlin or another optimizer function for this purpose?
If the problem is just:
Solve Ax=b for x in {0,1}
then you can use a MIP solver (e.g. Matlab intlinprog). If the problem is over-constrained and you want a least squares solution:
Min w'w
S.t. Ax - b = w
x in {0,1} (binary variable)
w free variable
then you have a MIQP (Mixed Integer Quadratic Programming) problem. There are good solvers for this, such as Cplex and Gurobi (both callable from Matlab). Matlab also has a discussion of an approximation scheme using intlinprog. Another idea is to replace the quadratic objective by a sum of absolute values; this can be formulated as a linear MIP model.
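A minimal sketch of that absolute-value variant with intlinprog (A and b are assumed to already be defined; the decision vector is [x; t], where t holds the per-row absolute residuals):

% Minimize sum(t)  subject to  -t <= A*x - b <= t,  x binary, t >= 0.
[m, n] = size(A);
f      = [zeros(n,1); ones(m,1)];      % objective: sum of the residual bounds t
intcon = 1:n;                          % the x entries are integer (bounded to 0/1 below)
Aineq  = [ A, -eye(m);                 %  A*x - t <= b
          -A, -eye(m)];                % -A*x - t <= -b
bineq  = [ b; -b];
lb     = [zeros(n,1); zeros(m,1)];
ub     = [ones(n,1);  inf(m,1)];
z      = intlinprog(f, intcon, Aineq, bineq, [], [], lb, ub);
x      = round(z(1:n));                % recovered Boolean solution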
I am trying to run a standard Kalman filter algorithm to calculate likelihoods, but I keep getting a problem with a non-positive-definite variance matrix when calculating the normal densities.
I have researched a little and seen that there may in fact be some numerical instability; I tried some numerical ways to avoid a non-positive-definite matrix, using both the Cholesky decomposition and its variant, the LDL' decomposition.
I am using MatLab.
Does anyone suggest anything?
Thanks.
I have encountered what might be the same problem before, when I needed to run a Kalman filter for long periods and my covariance matrix would degenerate over time. It might just be a problem of losing symmetry due to numerical error. One simple way to enforce your covariance matrix (let's call it P) to remain symmetric is to do:
P = (P + P')/2;   % where P' is transpose(P)
right after estimating P.
post your code.
As a rule of thumb, if the model is not accurate and the regularization (i.e. the model noise matrix Q) is not sufficiently "large", underfitting will occur and the covariance matrix of the estimator will be ill-conditioned. Try fine-tuning your Q matrix.
Even the Kalman filter implemented using the Joseph form can be numerically troublesome, as any old timer who once worked with a single-precision implementation of the filter can tell you. This problem was discovered zillions of years ago and prompted a lot of research into implementing the filter in a stable manner. Probably the best-known implementation is the UD filter, where the covariance matrix is factorized as UDU' and the two factors are updated and propagated using special formulas (see Thornton and Bierman). U is a unit upper triangular matrix with ones on its diagonal, and D is a diagonal matrix.
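For reference, a minimal sketch (standard textbook update, not the original poster's code) of a measurement update that combines the Joseph form with the symmetrization trick above; x, P, H, R and the measurement z are assumed to already exist with consistent dimensions:

S = H*P*H' + R;                        % innovation covariance
K = (P*H') / S;                        % Kalman gain
x = x + K*(z - H*x);                   % state update
I = eye(size(P));
P = (I - K*H)*P*(I - K*H)' + K*R*K';   % Joseph form keeps P positive semi-definite
P = (P + P')/2;                        % enforce symmetry against round-off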
I have a set of 4 PDEs:
du/dt + A(u) * du/dx = Q(u)
where u is a vector containing:
u=[u1;u2;u3;u4]
and A is a 4*4 matrix and Q is 4*1. A and Q are functions of u=[u1;u2;u3;u4].
But my questions are:
1. How can I solve the above equation in MATLAB?
2. If I solve it with MATLAB's PDE functions, can I convert the solution into a simple function that does not rely on MATLAB's built-in routines?
3. Is there any way to calculate A and Q explicitly? I mean that at every time step I calculate A and Q from the data of the previous time step and put the new values into the equation, so the program runs faster.
PDEs require finite differences, finite elements, boundary elements, etc. You can also turn them into ODEs using transforms like Laplace, Fourier, etc. Solve those using ODE functions and then transform back. Neither one is trivial.
Your equation is a non-linear, transient convection system: a first-order, quasi-linear hyperbolic PDE.
The equation you posted has the additional difficulty of being non-linear, because both the A matrix and the Q vector are functions of the unknown u. You'll have to start by linearizing your equations: solve for increments in u rather than for u itself.
Once you've done that, discretize the du/dx term using finite differences, finite elements, or boundary elements. You should start with a weighted residual integral formulation.
You're almost done: next, integrate w.r.t. time using the method of your choice.
It's not trivial.
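As a concrete illustration of question 3 (re-evaluating A and Q from the previous time step), here is a minimal sketch of an explicit first-order upwind scheme; computeA and computeQ are hypothetical placeholders for the problem-specific definitions, and the boundary conditions and stability (CFL) check are left out:

Nx = 200;  dx = 1/(Nx-1);       % assumed grid
dt = 1e-4; Nt = 1000;           % assumed time step and number of steps
U  = zeros(4, Nx);              % u = [u1;u2;u3;u4] stored as one column per grid point
for n = 1:Nt
    Unew = U;
    for i = 2:Nx                % backward (upwind) difference in x
        Ai   = computeA(U(:,i));             % A evaluated from the previous time step
        Qi   = computeQ(U(:,i));             % Q evaluated from the previous time step
        dudx = (U(:,i) - U(:,i-1)) / dx;
        Unew(:,i) = U(:,i) + dt * (Qi - Ai*dudx);
    end
    Unew(:,1) = U(:,1);         % left boundary held fixed (placeholder BC)
    U = Unew;
end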
Google found this: maybe it will help you.
http://www.mathworks.com/matlabcentral/fileexchange/3710-nonlinear-diffusion-toolbox
I'm trying to find eigenvalues of a matrix without using eig function (my homework says so). In Matlab, I define the matrix and identity matrix. But I cannot set up this equation:
A - x*I
x here is lambda, A is the matrix whose eigenvalues I should find, and I is the identity matrix. If you know how to find eigenvalues, you are supposed to understand this. How can I proceed?
you can get some inspiration here: http://en.wikipedia.org/wiki/Eigenvalue_algorithm
If the matrix is of fixed (small) size, you can easily work out det(A - lambda*eye(n)) = 0 by yourself (it is a polynomial in lambda) and use its roots.
With power iteration you can already find the dominant eigenvalue, and I know there is an extension of this algorithm that also finds the other eigenvalues, but I cannot recall how it works :(
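A minimal sketch of power iteration for the dominant eigenvalue (the example matrix and iteration count are arbitrary):

A = [4 1 0; 1 3 1; 0 1 2];     % example matrix (assumed)
v = rand(size(A,1), 1);        % random starting vector
for k = 1:200
    w = A*v;
    v = w / norm(w);           % normalize to avoid overflow/underflow
end
lambda_max = v' * A * v;       % Rayleigh quotient estimate of the dominant eigenvalue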