I want to solve the following two equations using fsolve:
p*x(1) - x(2) - exp(-x(1))=0 .... (1)
-x(1) + 2*x(2) - exp(-x(2))=0.....(2)
where "p" is the coefficient that I want to vary from -3 to +3, i.e., -3:0.1:3. This works if the value of p is manually specified each time, as below:
x = fsolve(@myfun, x0)
function F = myfun(x)
F = [-3*x(1) - x(2) - exp(-x(1));
-x(1) + 2*x(2) - exp(-x(2))];
Can Matlab vary the value of p automatically?
You can do it in one line using arrayfun, though it looks kind of convoluted and simply looping over p might be faster:
function F = myfun(x,p)
F = [p*x(1) - x(2) - exp(-x(1));
-x(1) + 2*x(2) - exp(-x(2))];
p_values = -3:.1:3;
x = arrayfun(@(p) fsolve(@(x) myfun(x,p), x0), p_values, 'UniformOutput', false)
and x is then a cell array containing the different answers. x{1} contains the solution at p = p_values(1), etc.
To make x a matrix with each column a solution (i.e. x(:,1) is the solution with p = p_values(1)), just call cell2mat:
x = cell2mat(x);
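For comparison, the plain loop mentioned above might look something like the sketch below (x0 here is just an assumed starting guess; use whatever you pass to fsolve):
p_values = -3:.1:3;
x0 = [1; 1];                       % assumed starting guess, for illustration only
x = zeros(2, numel(p_values));     % one column per value of p
for k = 1:numel(p_values)
    x(:,k) = fsolve(@(x) myfun(x, p_values(k)), x0);
end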
How explicitly does Matlab solve the rightmost equation in c1, specifically ((x-1)\y)?
I am well aware of what happens when you use a matrix, e.g. A\b (where A is a matrix and b is a column vector), but what happens when you use backslash on two vectors with equal numbers of rows?
The problem:
x = (1:3)';
y = ones(3,1);
c1 = ((x-1)\y) % why does this =0.6?
You're doing [0;1;2]\[1;1;1]. Essentially x=0.6 is the least-squares solution to
[0;1;2]*x=[1;1;1]
The case you have from the documentation is the following:
If A is a rectangular m-by-n matrix with m ~= n, and B is a matrix with m rows, then A\B returns a least-squares solution to the system of equations A*x= B.
(Specifically, you have m=3, n=1).
A = (1:3).' - 1; % = [0;1;2]
B = ones(3,1); % = [1;1;1]
x = A\B; % = 0.6
Algebraically, it's easy to see this is the solution to the least-squares minimisation
% Calculate least squares
leastSquares = sum( ((A*x) - B).^2 )
= sum( ([0;1;2]*x - [1;1;1]).^2 )
= sum( [-1; x-1; 2*x-1].^2 )
= 1 + (x-1)^2 + (2x-1)^2
= 1 + x^2 - 2*x + 1 + 4*x^2 - 4*x + 1
= 5*x^2 - 6*x + 3
% Minimum least squares -> derivative = 0
d(leastSquares)/dx = 10*x - 6 = 0
10*x = 6
x = 0.6
I have no doubt that MATLAB uses a more sophisticated algorithm to come to the same conclusion, but this lays out the mathematics in a fairly plain way.
You can see experimentally that there is no better solution by testing the following for various values of x... 0.6 gives the smallest error:
sum( ([0;1;2]*x - [1;1;1]).^2 )
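For instance, a quick sweep over candidate values (a sketch) shows the error is smallest at 0.6:
A = [0; 1; 2];
B = [1; 1; 1];
xs = 0:0.01:1.2;                              % candidate values of x
err = arrayfun(@(x) sum((A*x - B).^2), xs);   % least-squares error at each candidate
[~, idx] = min(err);
xs(idx)                                       % 0.6, matching A\B
plot(xs, err)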
Introduction
NOTE IN CODE AND DISCUSSION: a single d denotes a first derivative; a double d denotes a second derivative.
I am using Matlab to simulate some dynamic systems by numerically solving the governing Lagrange equations, which are basically a set of second-order ordinary differential equations. I am using ODE45. I found a great tutorial from Mathworks (link below) on how to solve a basic set of second-order ordinary differential equations.
https://www.mathworks.com/academia/student_center/tutorials/source/computational-math/solving-ordinary-diff-equations/player.html
Based on the tutorial I simulated the motion for an elastic spring pendulum by obtaining two second order ordinary differential equations (one for angle theta and the other for spring elongation) shown below:
theta double prime equation:
M*thetadd*(L + del)^2 + M*g*sin(theta)*(L + del) + M*deld*thetad*(2*L + 2*del) = 0
del (spring elongation) double prime equation:
K*del + M*deldd - (M*thetad^2*(2*L + 2*del))/2 - M*g*cos(theta) = 0
Both equations above have form ydd = f(x, xd, y, yd)
I solved the set of equations by a common reduction-of-order method, setting the column vector z to [theta, thetad, del, deld], so that zd = [thetad, thetadd, deld, deldd]. Next I used two MATLAB files: a simulation file and a function-handle file for ode45. See the code for both below:
Simulation File
%ElasticPdlmSymMainSim
clc
clear all;
%Define parameters
global M K L g;
M = 1;
K = 25.6;
L = 1;
g = 9.8;
% define initial values for theta, thetad, del, deld
theta_0 = 0;
thetad_0 = .5;
del_0 = 1;
deld_0 = 0;
initialValues = [theta_0, thetad_0, del_0, deld_0];
% Set a timespan
t_initial = 0;
t_final = 36;
dt = .01;
N = (t_final - t_initial)/dt;
timeSpan = linspace(t_initial, t_final, N); % integrate forward from t_initial to t_final
% Run ode45 to get z (theta, thetad, del, deld)
[t, z] = ode45(@OdeFunHndlSpngPdlmSym, timeSpan, initialValues);
Here is the function handle file:
function dz = OdeFunHndlSpngPdlmSym(~, z)
% Define Global Parameters
global M K L g
% Take output from SymDevFElSpringPdlm.m file for fy1 and fy2 and
% substitute into dz(2) and dz(4) respectively
% dz(1) and dz(3) are simply z(2) and z(4)
% fy1 = thetadd = dz(2) = -(M*g*sin(z(1))*(L + z(3)) + M*z(2)*z(4)*(2*L + 2*z(3)))/(M*(L + z(3))^2)
% fy2 = deldd = dz(4) = ((M*(2*L + 2*z(3))*z(2)^2)/2 - K*z(3) + M*g*cos(z(1)))/M
% return column vector [thetad; thetadd; deld; deldd]
dz = [z(2);
-(M*g*sin(z(1))*(L + z(3)) + M*z(2)*z(4)*(2*L + 2*z(3)))/(M*(L + z(3))^2);
z(4);
((M*(2*L + 2*z(3))*z(2)^2)/2 - K*z(3) + M*g*cos(z(1)))/M];
Question
However, I am coming across systems of equations where the second derivatives cannot be solved for explicitly, unlike the spring pendulum example. For one case I have the following set of ordinary differential equations:
y double prime equation
ydd - .5*L*(xdd*sin(x) + xd^2*cos(x)) + (k/m)*y - g = 0
x double prime equation
.33*L^2*xdd - .5*L*ydd*sin(x) - .33*L^2*C*cos(x) + .5*g*L*sin(x) = 0
L, g, m, k, and C are given parameters.
Note that the x'' term appears in the y'' equation and the y'' term appears in the x'' equation, so I am not able to use the reduction-of-order method. Can I use Matlab ODE45 to solve this set of ordinary differential equations in a manner similar to the first example?
Thanks!
This problem can be solved by working out some of the math by hand. The equations are linear in xdd and ydd so it should be straightforward to solve.
ydd - .5*L*(xdd*sin(x) + xd^2*cos(x)) + (k/m)*y - g = 0
.33*L^2*xdd - .5*L*ydd*sin(x) - .33*L^2*C*cos(x) + .5*g*L*sin(x) = 0
can be rewritten as
-.5*L*sin(x)*xdd + ydd = .5*L*xd^2*cos(x) - (k/m)*y + g
.33*L^2*xdd - .5*L*sin(x)*ydd = .33*L^2*C*cos(x) - .5*g*L*sin(x)
which is a linear system of the form A*w = b with unknown vector w = [xdd; ydd].
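A sketch of how that linear solve could sit inside an ode45 right-hand-side function, with state vector z = [x; xd; y; yd] (the function name rhs and the way the parameters are passed are just one possible arrangement):
function dz = rhs(~, z, L, g, m, k, C)
    % Unpack the state vector z = [x; xd; y; yd]
    x = z(1); xd = z(2); y = z(3); yd = z(4);
    % A*w = b with w = [xdd; ydd], using the rearranged equations above
    A = [ .33*L^2,      -.5*L*sin(x);
         -.5*L*sin(x),   1          ];
    b = [ .33*L^2*C*cos(x) - .5*g*L*sin(x);
           .5*L*xd^2*cos(x) - (k/m)*y + g ];
    w = A\b;                       % w(1) = xdd, w(2) = ydd
    dz = [xd; w(1); yd; w(2)];
end
It would be called as, e.g., [s, z] = ode45(@(s,z) rhs(s,z,L,g,m,k,C), [0 10], z0).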
For more complex systems, you can look into the fsolve function.
I want to solve:
I use the following MATLAB code, but it does not work.
Can someone please guide me?
function f=objfun
f=-f;
function [c1,c2,c3]=constraint(x)
a1=1.1; a2=1.1; a3=1.1;
c1=f-log(a1)-log(x(1)/(x(1)+1));
c2=f-log(a2)-log(x(2)/(x(2)+1))-log(1-x(1));
c3=f-log(a3)-log(1-x(1))-log(1-x(2));
x0=[0.01;0.01];
[x,fval]=fmincon('objfun',x0,[],[],[],[],[0;0],[1;1],'constraint')
You need to flip the problem around a bit. You are trying to find the point x (which is (l_1,l_2)) that makes the minimum of the 3 LHS functions the largest. So, you can rewrite your problem as, in pseudocode,
maximise, by varying x in [0,1] X [0,1]
min([log(a1)+log(x(1)/(x(1)+1)) ...
log(a2)+log(x(2)/(x(2)+1))+log(1-x(1)) ...
log(a3)+log(1-x(1))+log(1-x(2))])
Since Matlab has fmincon, rewrite this as a minimisation problem,
minimise, by varying x in [0,1] X [0,1]
max(-[log(a1)+log(x(1)/(x(1)+1)) ...
log(a2)+log(x(2)/(x(2)+1))+log(1-x(1)) ...
log(a3)+log(1-x(1))+log(1-x(2))])
So the actual code is
a1 = 1.1; a2 = 1.1; a3 = 1.1;   % same values as in the constraint function above
F = @(x) max(-[log(a1)+log(x(1)/(x(1)+1)) ...
log(a2)+log(x(2)/(x(2)+1))+log(1-x(1)) ...
log(a3)+log(1-x(1))+log(1-x(2))])
[L,fval]=fmincon(F,[0.5 0.5])
which returns
L =
0.3383 0.6180
fval =
1.2800
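If you want to impose the box x in [0,1] x [0,1] from the pseudocode explicitly, rather than relying on the starting point staying inside it, fmincon also accepts lower and upper bound vectors (a sketch, reusing F from above):
lb = [0 0];   % lower bounds on x(1), x(2)
ub = [1 1];   % upper bounds
[L,fval] = fmincon(F, [0.5 0.5], [], [], [], [], lb, ub)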
You can also solve this in the convex optimization package CVX with the following MATLAB code (assuming a1, a2, a3 as above):
cvx_begin
variables T(1);
variables x1(1);
variables x2(1);
maximize(T)
subject to:
log(a1) + x1 - log_sum_exp([0, x1]) >= T;
log(a2) + x2 - log_sum_exp([0, x2]) + log(1 - exp(x1)) >= T;
log(a3) + log(1 - exp(x1)) + log(1 - exp(x2)) >= T;
x1 <= 0;
x2 <= 0;
cvx_end
l1 = exp(x1); l2 = exp(x2);
To use CVX, each constraint and the objective function have to be written in a way that is provably convex under CVX's ruleset. Making the substitution x1 = log(l1) and x2 = log(l2) allows one to do that. Note that log_sum_exp([0,x1]) = log(exp(0) + exp(x1)) = log(1 + l1).
This also returns the answers: l1 = .3383, l2 = .6180, T = -1.2800
This question is connected to this one. Suppose again the following code:
syms x
f = 1/(x^2+4*x+9)
Now taylor allows the function f to be expanded about infinity:
ts = taylor(f,x,inf,'Order',100)
But the following code
c = coeffs(ts)
produces errors, because the series does not contain positive powers of x (it contains negative powers of x).
In such a case, what code should be used?
Since the Taylor expansion around infinity was likely performed with the substitution y = 1/x and expanded around 0, I would explicitly make that substitution to make the powers positive for use with coeffs:
syms x y
f = 1/(x^2+4*x+9);
ts = taylor(f,x,inf,'Order',100);
[c,ty] = coeffs(subs(ts,x,1/y),y);
tx = subs(ty,y,1/x);
The output from taylor is not a multivariate polynomial, so coeffs won't work in this case. One thing you can try is using collect (you may get the same or similar result from using simplify):
syms x
f = 1/(x^2 + 4*x + 9);
ts = series(f,x,Inf,'Order',5) % Puiseux series of f about x = Inf
c = collect(ts)
which returns
ts =
1/x^2 - 4/x^3 + 7/x^4 + 8/x^5 - 95/x^6
c =
(x^4 - 4*x^3 + 7*x^2 + 8*x - 95)/x^6
Then you can use numden to extract the numerator and denominator from either c or ts:
[n,d] = numden(ts)
which returns the following polynomials:
n =
x^4 - 4*x^3 + 7*x^2 + 8*x - 95
d =
x^6
coeffs can then be used on the numerator. You may find other functions listed here helpful as well.
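For instance, continuing the code above (a small sketch):
[cn, tn] = coeffs(n, x)   % cn: coefficients of the numerator, tn: the matching powers of x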
I have to solve a system of ordinary differential equations of the form:
dx/ds = 1/x * [y* (g + s/y) - a*x*f(x^2,y^2)]
dy/ds = 1/x * [-y * (b + y) * f()] - y/s - c
where x and y are the variables I need to find out and s is the independent variable; the rest are constants. I've tried to solve this with ode45 with no success so far:
y = ode45(@yprime, s, [1 1]);
function dyds = yprime(s,y)
global g a v0 d
dyds_1 = 1./y(1) .*(y(2) .* (g + s ./ y(2)) - a .* y(1) .* sqrt(y(1).^2 + (v0 + y(2)).^2));
dyds_2 = - (y(2) .* (v0 + y(2)) .* sqrt(y(1).^2 + (v0 + y(2)).^2))./y(1) - y(2)./s - d;
dyds = [dyds_1; dyds_2];
return
where #yprime has the system of equations. I get the following error message:
YPRIME returns a vector of length 0, but the length of initial
conditions vector is 2. The vector returned by YPRIME and the initial
conditions vector must have the same number of elements.
Any ideas?
thanks
Certainly, you should have a look at your function yprime. Here is an example using a simple model that has the same number of differential state variables as your problem.
function dyds = yprime(s, y)
dyds = zeros(2, 1);
dyds(1) = y(1) + y(2);
dyds(2) = 0.5 * y(1);
end
yprime must return a column vector that holds the values of the two right hand sides. The input argument s can be ignored because your model is time-independent. The example you show is somewhat difficult in that it is not of the form dy/dt = f(t, y). You will have to rearrange your equations as a first step. It will help to rename x into y(1) and y into y(2).
Also, are you sure that your global g a v0 d are not empty? If any one of those variables remains uninitialized, you will be multiplying state variables with an empty matrix, eventually resulting in an empty vector dyds being returned. This can be tested with
assert(~isempty(v0), 'v0 not initialized');
in yprime, or you could employ a debugging breakpoint.
The syntax for the ODE solvers is [s, y] = ode45(@yprime, [1 10], [2 2]), and you don't need to do element-wise operations in your case, i.e. instead of .* just use *.
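Putting both answers together, a minimal sketch of the calling script might look like the following (the parameter values here are placeholders for illustration, not taken from the question):
% Make sure the globals used inside yprime are set before calling ode45
global g a v0 d
g = 9.8; a = 1; v0 = 1; d = 1;   % placeholder values, for illustration only
% Two outputs: the s values and the solution matrix (one column per state variable)
[s, y] = ode45(@yprime, [1 10], [2 2]);
plot(s, y(:,1), s, y(:,2))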