Automatically generating systems of equations in MATLAB (similar issue in Python)

Let's say that I want to solve a system of N equations with N unknown variables. Assume the equation is of the following general form (I have simplified it greatly, and we can pretend each line is equal to zero).
x1 - const(1)
x2 - const(2)
x3 - const(3)
...
xN - const(N)
where x1, x2, ..., xN are my variables, and const is a vector of length N of constants determined earlier in the code. I do not know in advance (i.e. cannot hard-code) how many equations and variables there are, but I still wish to write and solve the system in a general way.
In MATLAB, my current solution is to do the following, where n_vars is the number of variables, which my program determines earlier on.
sym_vars = sym('x',[1 n_vars]);
for i = 1:n_vars
eqn(i) = sym_vars(i)-const(i);
end
This builds the system of equations, eqn, that I showed above. All of the numerical solvers (e.g. fsolve, lsqnonlin, ode45) that I plan to use require the system of equations to be defined as a function handle or as a separate function entirely. I can convert the symbolic expression to a function handle via matlabFunction or, if dealing with ODEs, via odeFunction to address this.
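For example, something along these lines should work for the fsolve case (a sketch; matlabFunction's 'Vars' option packs the unknowns into a single vector argument):
f = matlabFunction(eqn, 'Vars', {sym_vars});   % f accepts one 1-by-n_vars vector
x_sol = fsolve(f, zeros(1, n_vars));           % solve eqn == 0 numerically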
However, there are two main issues with this approach that I want to resolve. The first is that I do not want to have to make symbolic variables and rely on the symbolic toolbox if I am only performing numerical computations. The second is that if I am solving ODEs, the variables actually have to be x1(t), x2(t), x3(t), ..., xN(t) for odeFunction to work properly. However, using the same logic as my sym approach above to make these variables in a general way leads to a warning because character vectors that aren't valid variable names will not be allowed in future releases.
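One way to avoid that warning, assuming R2017b or later for str2sym, is to build the x_i(t) variables like this (a sketch; it still relies on the symbolic toolbox):
syms t
x_t = sym(zeros(1, n_vars));                  % preallocate a symbolic row vector
for i = 1:n_vars
    x_t(i) = str2sym(sprintf('x%d(t)', i));   % the symbolic function value x_i(t)
end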
How can I write a system of equations using function handles instead of symbolic variables (or an equivalent solution)? Surely there must be a way to write a system of equations without doing so manually.

Use a vector function like
N=5;
const=1:5;
fsolve(@(x) x - const, zeros(1,N))
the result is:
1.0000 2.0000 3.0000 4.0000 5.0000
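The same pattern works for the ODE solvers without any symbolic variables: write the right-hand side as a handle of (t, x). As a sketch, for a hypothetical system x_i' = x_i - const(i):
N = 5;
const = (1:N)';                              % hypothetical constants, as a column
odefun = @(t, x) x - const;                  % right-hand side f(t, x)
[T, X] = ode45(odefun, [0 50], zeros(N, 1)); % integrate from t = 0 to t = 50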

Related

NOT CONVERGING: using the Newton-Raphson method to find roots of nonlinear equations

I have tried this code on non-linear polynomial functions and it works well. But for this system I tried several methods of solving the linear equation df0*X = f0 (backslash, bicg, lsqr), and also tried several initial values, but the result never converges.
% Define the given function
syms x1 x2 x3
x=[x1,x2,x3];
f(x)=[3*x1-cos(x2*x3)-1/2;x1^2+81*(x2+0.1)^2-sin(x3)+1.06;...
exp(-x1*x2)+20*x3+1/3*(10*pi-3)];
% Define the stopping criteria based on Niter or the relative error
tol=10^-5;
Niter=100;
df=jacobian(f,x);
x0=[0.1;0.1;-0.1];
% Setting starting values
error=1;
i=0;
% Start the Newton-Raphson Iteration
while (abs(error) > tol)
    f0 = eval(f(x0(1), x0(2), x0(3)));
    df0 = eval(df(x0(1), x0(2), x0(3)));
    xnew = x0 - df0\f0; % also tried lsqr(df0,f0), bicg(df0,f0)
    error = norm(xnew - x0);
    x0 = xnew;
    i = i + 1
    if i >= Niter
        fprintf('Iteration count exceeded Niter\n');
        return;
    end
end
You'll need anonymous functions here to do the job properly (we mentioned them in passing today!).
First, let's get the function definition down. Anonymous functions are nice ways for you to call things in a manner similar to mathematical functions. For example,
f = @(x) x^2;
is a squaring function. To evaluate it, just write it like you would on paper, e.g. f(2). Since you have a multivariate function, you'll need to vectorize the definition as follows:
f = @(x) [3*x(1) - cos(x(2)*x(3)) - 1/2; x(1)^2 + 81*(x(2) + 0.1)^2 - sin(x(3)) + 1.06; ...
    exp(-x(1)*x(2)) + 20*x(3) + 1/3*(10*pi - 3)];
For your Jacobian, you'll need another anonymous function (maybe call it grad_f): compute it on paper, then code it in. Hard-coding the Jacobian this way avoids relying on finite-difference approximations, whose errors can pile up when the Jacobian is not stable in some regions.
The key is to just be careful and use some good coding practices. See this document for more info on anonymous functions and other good MATLAB practices.
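A sketch of how this could look for the system above, reusing f as defined a few lines up and hand-coding the Jacobian (the derivatives below were worked out by hand for this particular f; double-check them before trusting the result):
% Hand-coded Jacobian of f (sketch; verify the entries yourself)
grad_f = @(x) [3, x(3)*sin(x(2)*x(3)), x(2)*sin(x(2)*x(3)); ...
    2*x(1), 162*(x(2) + 0.1), -cos(x(3)); ...
    -x(2)*exp(-x(1)*x(2)), -x(1)*exp(-x(1)*x(2)), 20];
x0 = [0.1; 0.1; -0.1];
tol = 1e-5; Niter = 100;
for k = 1:Niter
    step = grad_f(x0) \ f(x0);   % Newton step: solve J*step = f(x0)
    x0 = x0 - step;
    if norm(step) < tol
        break
    end
end
x0   % converged estimate of the root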

Matlab: Solving a linear system of anonymous functions

I have a system of equations...
dF(a,b,c)/da = 0;
dF(a,b,c)/db = 0;
dF(a,b,c)/dc = 0;
where a,b,c are unknown variable constants and dF/d* are anonymous functions of the variables. I have to solve for a,b and c in an optimization problem. When the system reduces to just one equation, I use Matlab's fzero to solve for the variable and it works. For example
var_a = fzero(@(a) dF(a)/da, 0);
After noticing that fzero and fsolve give dramatically different answers for some cases I did some searching. From what I gather, fzero only works for a single equation of a single variable? So moving to a system of equations, I'd like to choose the most appropriate method. I've used Matlab's solve in the past, but I believe that is for symbolic expressions only? What is the best method for solving a linear system of anonymous functions, which all equal zero?
I tried the following, and got back results
vars = fsolve(@(V) [dF(V)/da; dF(V)/db; dF(V)/dc], zeros(1,3));
where vars contains all 3 variables, but after reading the examples in the previous link, it seems fsolve couldn't exactly find the zeros of x^2 and x^3. The solution vector in the system I presented above is all zeros and the functions are polynomials. Putting this all together, I'm wondering if fsolve isn't the best choice?
Can I build a system of calls to fzero? Something along the lines of
vars = [fzero(@(a) dF(a,b,c)/da, 0);
        fzero(@(b) dF(a,b,c)/db, 0);
        fzero(@(c) dF(a,b,c)/dc, 0)];
which I don't think would work (how would each dF/d* get the other 2 variable inputs?) or would it?
Any thoughts?
You can numerically minimize any function using lsqnonlin. To adapt this to a system of equations, simply turn them into a single function with a vector input. Something like this:
fToMinimize = @(abc) ...
    (dF(abc(1),abc(2),abc(3))/da)^2 + ...
    (dF(abc(1),abc(2),abc(3))/db)^2 + ...
    (dF(abc(1),abc(2),abc(3))/dc)^2;
abcSolved = lsqnonlin(fToMinimize, [0 0 0])
If you have a guess for the values of a, b, and c, you can (and should) use those instead of the [0 0 0] vector. There are also many options within lsqnonlin to adjust its behavior, for example how close to the best answer you want to get. If the functions are well behaved, you should be able to tighten the tolerances down a lot if you are looking for a near-exact answer.
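For instance, tolerances can be tightened via optimoptions (a sketch; older releases use optimset with TolFun/TolX instead of these parameter names):
opts = optimoptions('lsqnonlin', 'FunctionTolerance', 1e-12, 'StepTolerance', 1e-12);
abcSolved = lsqnonlin(fToMinimize, [0 0 0], [], [], opts);   % [] for no bounds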

How to find the intersections of two functions in MATLAB?

Let's say I have a function 'x' and a function '2sin(x)'.
How do I find their intersections, i.e. the roots of their difference, in MATLAB? I can easily plot the two functions and read the crossings off the graph, but surely there must be a more exact way of doing this.
If you have the two functions available as function handles (anonymous functions), you can define their difference and use fzero to find a zero, i.e. a root:
f = @(x) x;           % defines a function f(x)
g = @(x) 2*sin(x);    % defines a function g(x)
% solve f == g
xroot = fzero(@(x) f(x) - g(x), 0.5);   % starts the search from x == 0.5
For tricky functions you might have to set a good starting point, and it will only find one solution even if there are multiple ones.
The constructs seen above @(x) something-with-x are called anonymous functions, and they can be extended to multivariate cases as well, like @(x,y) 3*x.*y+c assuming that c is a variable that has been assigned a value earlier.
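A tiny illustration of that capture behaviour (the values here are made up, just to show the mechanics):
c = 2;
h = @(x, y) 3*x.*y + c;   % c is frozen at its current value (2) when h is created
h(1, 4)                   % returns 14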
When writing the comments, I thought that
syms x; solve(x==2*sin(x))
would return the expected result. At least in Matlab 2013b, solve fails to find an analytic solution for this problem, falling back to a numeric solver that returns only one solution, 0.
An alternative is
s = feval(symengine,'numeric::solve',2*sin(x)==x,x,'AllRealRoots')
which is taken from this answer to a similar question. Besides using AllRealRoots you could use a numeric solver, manually setting starting points which roughly match the values you have read from the graph. This way you get precise results:
[fzero(@(x)f(x)-g(x),-2), fzero(@(x)f(x)-g(x),0), fzero(@(x)f(x)-g(x),2)]
For a higher precision you could switch from fzero to vpasolve, but fzero is probably sufficient and faster.
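For example, with the Symbolic Math Toolbox (a sketch; the third argument is a starting point near the root you want):
syms x
vpasolve(2*sin(x) == x, x, 2)   % returns the positive root near x = 1.8955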

Solving a state-space (2nd order equation) with ode45 in MATLAB

I'm trying to teach myself how to use MATLAB for solving state-space systems. I have what seems to be a pretty straightforward system, but have been unable to find any decent, straightforward examples for a novice thus far.
I'd like a simple walk-through of how to translate the system into MATLAB, what variables to set, and how to solve for about 50(?) seconds (from t=0 to 50 or any value really).
I'd like to use ode45 since it's based on a 4th/5th-order Runge-Kutta method.
Here's the 2nd-order equation:
θ'' + 0.03*|θ'|*θ' + 4*pi^2*sin(θ) = 0
The state-space:
x_1' = x_2
x_2' = -4*pi^2*sin(x_1) - 0.03*|x_2|*x_2
x_1 = θ, x_2 = θ'
θ(0) = pi/9 rad, θ'(0) = 0, h (step) = 1
You need a derivative function which, given the current state of the system and the current time, returns the derivative of all of the state variables. Generally this function is of the form
function xDash=derivative(t,x)
where xDash is a vector with the derivative of each element, and x is a vector of the state variables. If your variables are called x_1, x_2 etc. it's a good idea to put x_1 in x(1), etc. Then you need a formula for the derivative of each state variable in terms of the other state variables; for example you could have xDash_1 = x_1 - x_2, and you would code this as xDash(1) = x(1) - x(2). Hopefully that clears something up.
For your example, the derivative function will look like
function xDash=derivative(t,x)
xDash=zeros(2,1);
xDash(1)=x(2);
xDash(2)=-4*pi^2*sin(x(1))-0.03*abs(x(2))*x(2);
end
and you would solve the system using
[T,X] = ode45(@derivative, 0:50, [pi/9 0]);
This gives output at t=0,1,2,...,50.
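If you then want to look at the result, something like this (a sketch) plots θ against time from the first column of X:
plot(T, X(:,1))               % theta(t)
xlabel('t'), ylabel('\theta(t)')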

Using unspecified constants in matlab

I'm trying to solve a system of equations in the s-domain, so I set up this system of equations in matrix form:
a=[.4*s+s+5 -5; -5 .5*s+5]
c=[3/s; 3/(2*s)]
(1/s)*a*b=c
I just get the error that s is undefined.
How can I solve for b in terms of s?
Matlab does not (natively) do symbolic calculations, which is what your code is trying to do. Matlab's variables need to be concrete numbers, or arrays, or structures, etc. They cannot just be placeholders for arbitrary numbers.
(UNLESS: You use the symbolic computing toolbox for Matlab. I haven't really used this because I prefer to do symbolic computing in environments such as Maple or Mathematica. You could even solve your problem on the Wolfram Alpha website)
But if you pick a specific value of s, computing what you want is easy:
s = 5;
a=[.4*s+s+5 -5; -5 .5*s+5];
c=[3/s; 3/(2*s)];
b = s*(a\c);
where I have used the backslash operator to solve the linear system.
You should now have that
(1/s)*a*b-c
is the zero vector.
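You can check that numerically:
norm((1/s)*a*b - c)   % should be zero up to round-off error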
EDIT: I looked into the symbolic toolbox. It looks like this is what you want (but you need to have the symbolic toolbox licensed and installed for it to work):
syms s;
a=[.4*s+s+5 -5; -5 .5*s+5];
c=[3/s; 3/(2*s)];
b = simplify(s*(a\c))
The code to perform your calculation using symbolic operators is:
syms s; %This defines 's' as a symbolic token
a=[.4*s+s+5 -5; -5 .5*s+5]; %a and c inherit the symbolic properties from s
c=[3/s; 3/(2*s)];
result = solve('(1/s)*a*b=c','b') %Solve is the general symbolic toolbox algebraic solver.
This produces
result =
(c*s)/a
Generally speaking, Matlab performs best as a numerical toolbox. So depending on your application I would go with another approach, such as that demonstrated by Ian Hincks in another answer. But sometimes the situation demands a symbolic solution.