MATLAB: How to solve a linear system modulo m

Does anyone know what functions are available for solving linear systems when the equations are actually congruences mod m? The goal is to solve a linear system Ax = b for values of x such that Ax is congruent to b (mod m).
A discussion of how to perform Gaussian elimination in this situation can be found here, but I was hoping to use MATLAB rather than attempting to do it by hand.

Have a look at the lincon() function found here:
http://www.mathworks.com/matlabcentral/fileexchange/32856-system-of-linear-congruences/content/lincon.m
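If you would rather see the idea in plain MATLAB, here is a minimal sketch of Gaussian elimination mod m. It assumes m is prime (so every nonzero pivot has a modular inverse) and that the system has a unique solution; solve_mod is just a name made up for this sketch, and the lincon file above is the more complete implementation.

function x = solve_mod(A, b, m)
    % Solve A*x = b (mod m) by Gaussian elimination, assuming m is prime.
    n = size(A, 1);
    M = mod([A b], m);                         % augmented matrix, reduced mod m
    for k = 1:n
        r = k - 1 + find(M(k:end, k) ~= 0, 1); % row with a nonzero pivot
        M([k r], :) = M([r k], :);             % swap it into place
        [~, u] = gcd(M(k, k), m);              % u = modular inverse of the pivot
        M(k, :) = mod(M(k, :) * mod(u, m), m); % scale the pivot row so pivot is 1
        for i = [1:k-1, k+1:n]                 % eliminate column k in other rows
            M(i, :) = mod(M(i, :) - M(i, k) * M(k, :), m);
        end
    end
    x = M(:, end);
end

For example, solve_mod([2 3; 1 4], [1; 2], 7) returns x = [1; 2], and mod(A*x - b, 7) is all zeros.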

Related

Boolean least squares

For a spectrum estimation algorithm I need to find the best fitting linear combination of vectors to fit a target spectral distribution. So far, this works relatively well using the lsqlin optimizer in MATLAB.
However, for the final application I would like to approximate/solve this problem with x restricted to zeros and ones, i.e. Ax = b solved for Boolean x.
Is there any way to parametrize lsqlin or another optimizer function for this purpose?
If the problem is just:
Solve Ax=b for x in {0,1}
then you can use a MIP solver (e.g. Matlab intlinprog). If the problem is over-constrained and you want a least squares solution:
min  w'w
s.t. Ax - b = w
     x in {0,1} (binary variables)
     w free variable
then you have a MIQP (Mixed Integer Quadratic Programming) problem. There are good solvers for this, such as Cplex and Gurobi (callable from Matlab). Matlab also has a discussion of an approximation scheme using intlinprog. Another idea is to replace the quadratic objective by a sum of absolute values; this can be formulated as a linear MIP model.
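As a concrete illustration of that last idea, here is a hedged sketch of the sum-of-absolute-values formulation using intlinprog. The random A and b are placeholders standing in for your spectra; the variable vector is z = [x; w], where x is binary and w bounds the absolute residuals.

rng(0);                                      % reproducible toy data (placeholder)
A = randn(8, 5);
b = A * [1; 0; 1; 1; 0] + 0.05 * randn(8, 1);
[m, n] = size(A);

f      = [zeros(n, 1); ones(m, 1)];          % objective: sum of residual bounds w
intcon = 1:n;                                % the x entries are integer...
Aineq  = [ A, -eye(m);                       %  A*x - b <= w
          -A, -eye(m)];                      % -(A*x - b) <= w
bineq  = [ b; -b];
lb     = zeros(n + m, 1);
ub     = [ones(n, 1); inf(m, 1)];            % ...and bounded to [0,1], so binary

z = intlinprog(f, intcon, Aineq, bineq, [], [], lb, ub);
x = round(z(1:n))                            % Boolean least-absolute-deviations fit

Note that this minimises the 1-norm of the residual rather than the 2-norm; if you need the true least-squares objective you are back in MIQP territory (Cplex, Gurobi), as described above.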

What is the benefit of using SVD for solving Ax=b?

I have a linear equation
Ax=b
where A is a full-rank matrix of size 512x512, b is a 512x1 vector, and x is the unknown vector. I want to find x, and I have a couple of options for doing that:
1. Using the normal way:
inv(A)*b
2. Using SVD (singular value decomposition):
[U, S, V] = svd(A);
x = V*(diag(diag(S).^-1)*(U.'*b));
Both methods give the same result. So what is the benefit of using SVD to solve Ax=b, especially in the case where A is a 2D matrix?
Welcome to the world of numerical methods, let me be your guide.
You, as a new person in this world, wonder: "Why would I do something this difficult with this SVD stuff instead of the commonly known inverse?! I'm going to try it in Matlab!"
And no answer was found. That is because you are not looking at the problem itself! The problems arise when you have an ill-conditioned matrix. Then computing the inverse is not possible numerically.
example:
A=[1 1 -1;
1 -2 3;
2 -1 2];
Try to invert this matrix using inv(A). You'll get Inf values (and a warning that the matrix is singular to working precision).
That is because the condition number of the matrix is very high (check cond(A)).
However, if you try to solve it using the SVD method (with b=[1;-2;3]) you will get a result. Solving Ax=b systems with bad condition numbers is still a hot research topic.
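To make the "you will get a result" part concrete: one way the SVD helps here is that you can drop the numerically zero singular values and use the pseudoinverse, which returns the minimum-norm least-squares solution even though this particular system has no exact solution. A short sketch:

A = [1  1 -1;
     1 -2  3;
     2 -1  2];
b = [1; -2; 3];

[U, S, V] = svd(A);
s   = diag(S);
tol = max(size(A)) * eps(max(s));                 % same default tolerance rank() uses
r   = sum(s > tol);                               % numerical rank (2 for this A)
x   = V(:, 1:r) * ((U(:, 1:r)' * b) ./ s(1:r));   % truncated-SVD solution

% pinv(A)*b gives the same answer; inv(A)*b only gives Inf values here.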
As @Stewie Griffin suggested, the best way to go is mldivide, as it does a lot of work behind the scenes.
(Yeah, my example is not very good because the inverse only gives you Inf values, but there is a much better example in this YouTube video.)
inv(A)*b has several negative sides. The main one is that it explicitly calculates the inverse of A, which is both time-consuming and prone to inaccuracies when values vary by many orders of magnitude.
Although it might be better than inv(A)*b, using svd is not the "correct" approach here. The MATLAB way to do this is mldivide, \. Using this, MATLAB chooses the best algorithm to solve the linear system based on its properties (Hermitian, upper Hessenberg, real and positive diagonal, symmetric, diagonal, sparse, etc.). Often the solution will be an LU factorization with partial pivoting, but it varies. You'll have a hard time beating MATLAB's implementation of mldivide, but using svd might give you some more insight into the properties of the system if you actually investigate U, S and V. If you don't want to do that, go with mldivide.
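A small, hedged illustration of the accuracy point (the Hilbert matrix below is just a stock ill-conditioned example, not something from the question): the residual from mldivide is typically orders of magnitude smaller than the residual you get after forming the inverse explicitly.

A = hilb(12);                     % cond(A) is roughly 1e16
x_true = ones(12, 1);
b = A * x_true;

x_inv  = inv(A) * b;              % explicit inverse
x_back = A \ b;                   % mldivide

res_inv  = norm(A * x_inv  - b)   % typically much larger
res_back = norm(A * x_back - b)   % typically near machine precision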

Finding MaxiMin Solution of Function in Matlab

I would like to find the maximin solution of a function f in Matlab, i.e. x* = argmax_x min_y f(x, y).
x and y are both real vectors and f is smooth but 'quite complex to calculate' (it is formed from the output of a neural network).
I tried an alternating approach of holding x constant and minimising for y and then holding y constant and maximising for x but this did not converge and instead oscillated.
I believe you can use genetic algorithms to solve the problem, but firstly I could not see how to do that in Matlab, and secondly it seems a waste of the fact that f is smooth.
I have both the Optimization and Global Optimization toolbox. What is the best way to solve this problem in Matlab?
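Since no answer was posted for this one, the following is only a hedged sketch of one possible route with the Optimization Toolbox: nest the inner minimisation over y inside the outer maximisation over x, rather than alternating (which is what oscillated). Everything below is illustrative; f is a scalar toy function standing in for the neural-network output, and a derivative-free outer solver is used because the nested inner solve makes finite-difference gradients unreliable.

f = @(x, y) -(x - 1).^2 + (y + x).^2;          % placeholder smooth objective

opts  = optimoptions('fminunc', 'Display', 'off');
inner = @(x) fminunc(@(y) f(x, y), 0, opts);   % y*(x) = argmin_y f(x, y)
g     = @(x) f(x, inner(x));                   % g(x)  = min_y f(x, y)

x_star = fminsearch(@(x) -g(x), 0);            % maximise g(x) over x
y_star = inner(x_star);                        % for this toy f: x_star ~ 1, y_star ~ -1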

Solve Ax = b using MATLAB

I have a linear system of equations Ax = b to solve in MATLAB. What I know is that A is sparse, positive definite and symmetric. I know the command x = A \ b works, yet I am not sure MATLAB takes full advantage of A's good properties to maximize efficiency. Is there any way to specify the algorithm used to solve it, for example the Conjugate Gradient algorithm, in MATLAB?
If your matrix is sparse, you can use MATLAB's iterative solver functions, for example bicg for the biconjugate gradients method.
MATLAB's mldivide operator does indeed take advantage of properties of A. See the documentation for details - expand the "Algorithm" section.
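If you do want to pick the algorithm yourself, here is a hedged sketch of the conjugate-gradient route for a sparse, symmetric positive definite A. The gallery matrix is just a stand-in for your A, and the incomplete Cholesky preconditioner is optional but usually speeds up convergence.

A = gallery('poisson', 32);            % sparse SPD test matrix (1024x1024)
b = ones(size(A, 1), 1);

L = ichol(A);                          % incomplete Cholesky preconditioner
tol = 1e-8;  maxit = 200;
[x, flag, relres, iter] = pcg(A, b, tol, maxit, L, L');

% flag == 0 means pcg converged to tol within maxit iterations;
% compare against A \ b if you want to sanity-check the result.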

How to solve a complex system of equations using matlab?

I have to analyze 802.11 saturation throughput using Matlab, and here is my problem. I'm trying to solve the parametric equations below (the parameters are m, W, a) using the solve function, and I get
Warning: Explicit solution could not be found
How could I solve the above equations using Matlab?
I guess you were trying to find an analytical solution for tau and p using symbolic math. Unless you're really lucky with your parameters (e.g. m=1), there won't be an analytical solution.
If you're interested in numerical values for tau and p, I suggest you manually substitute p into the first equation, and then solve an equation of the form tau - bigFraction = 0 using, e.g., fzero.
Here's how you'd use fzero to solve a simple equation k*x = exp(-x), with k being a parameter.
k = 5;                             % set a value for k
x = fzero(@(x) k*x - exp(-x), 0);  % initial guess: x = 0
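To connect this to the two-equation system above: assuming it has the usual form tau = g(p), p = h(tau), substituting p = h(tau) into the first equation leaves a single scalar equation tau - g(h(tau)) = 0 that fzero can handle. The g and h below are placeholders chosen only so the sketch runs; use your own expressions for the big fractions.

g = @(p)   1 ./ (1 + 2*p);             % placeholder for tau = g(p)
h = @(tau) 1 - (1 - tau).^5;           % placeholder for p = h(tau)

tau = fzero(@(t) t - g(h(t)), 0.1);    % initial guess tau = 0.1
p   = h(tau);                          % recover p from the solved tau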