I am trying to improve the accuracy of the function below using the mathematical strategy given next (it essentially breaks the single interval in the current code into multiple subintervals, but how do I even implement that?):
[the mathematical theory] - It is well known that the Trapezoid rule gives a more accurate approximation if the interval is broken up into smaller subintervals, so that I1 = [a, b1], I2 = [b1, b2], I3 = [b2, b3], ..., In = [b(n-1), bn], where bn = b. Write a program that implements this strategy using your NC.m code from above. It should be able to complete the task for an arbitrary n. How many sub-intervals must be created to get an "accurate" integral approximation of the function listed below on the interval [-3, 0]?
%For this problem write a script file called NC.m that implements
%the Newton-Cotes method of integration for an arbitrary function f(x). It
%should take as inputs the function and the limits of integration [a, b] and
%output the value of the definite integral. Specifically, you should use the
%Trapezoid rule as presented in Equation (11.73)
function f = NC(a, b, fun) % Newton-Cotes (single-interval Trapezoid rule)
% a and b are the limits of integration
% setting it up
fa = fun(a); % y value at the lower limit
fb = fun(b); % y value at the upper limit
% the Trapezoid rule itself
f = (b - a)*(fa + fb)/2;
end
%Result from estimation:
% fun = @(x) normpdf(x);
% f = NC(-3, 0, fun)   % returns 0.6051
%not accurate when compared to the result of the exact calculation:
% syms x
% f = normpdf(x);
% a = -3; % lower limit
% b = 0;  % upper limit
% int(f, a, b)         % returns 0.4897
Please help. It would be greatly appreciated!
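For reference, here is a minimal sketch of the composite strategy, assuming the NC.m above and uniformly spaced subinterval edges (the assignment allows arbitrary edges b1, ..., bn; the name NC_comp is just for illustration):
function f = NC_comp(a, b, fun, n)
% Composite Trapezoid rule: split [a, b] into n uniform subintervals
% and sum the single-interval NC result over each of them.
edges = linspace(a, b, n+1); % edges(1) = a, edges(n+1) = b
f = 0;
for i = 1:n
    f = f + NC(edges(i), edges(i+1), fun);
end
end
Calling, say, NC_comp(-3, 0, @(x) normpdf(x), n) for increasing n should converge toward the symbolic result above; increasing n until successive estimates agree to the desired number of digits answers the "how many sub-intervals" part.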
Related
How do I solve the following system of equations in MATLAB when one of the elements of the variable vector is a constant? Please do give the code if possible.
More generally, if the solution is to use symbolic math, how do I go about generating a large number of variables, say 12 (rather than just two), before solving them?
For example, create a number of symbolic variables using syms, and then set up the system of equations as below.
syms a1 a2
A = [matrix];   % your coefficient matrix goes here
x = [1; a1; a2];
y = [1; 0; 0];
eqs = A*x == y
sol = solve(eqs, [a1, a2])
sol.a1
sol.a2
In case you have a system with many variables, you could define all the symbols using syms, and solve it like above.
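For instance, here is a minimal sketch of that many-variable case (the rand(12) matrix and the right-hand side are stand-ins for your actual system); sym with a size argument generates the numbered variables in one call:
% Sketch: create 12 symbolic unknowns at once instead of typing syms a1 ... a12
a = sym('a', [12 1]);  % column vector of symbolic variables a1, ..., a12
A = rand(12);          % stand-in coefficient matrix; substitute your own
y = [1; zeros(11, 1)]; % stand-in right-hand side
eqs = A*a == y;
sol = solve(eqs, a);   % sol is a struct with fields sol.a1, ..., sol.a12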
You could also perform a parameter optimization with fminsearch. First you have to define a cost function, in a separate function file, in this example called cost_fcn.m.
function J = cost_fcn(p)
% make sure p is a column vector
p = reshape(p, [length(p) 1]);
% system of equations, can be linear or nonlinear
A = magic(12); % your system, I took some arbitrary matrix
sol = A*p;
% the target the system of equations should reach; can be zero or some
% other vector
goal = zeros(12,1);
% calculate the residual (named err to avoid shadowing the built-in error)
err = goal - sol;
% use a cost criterion, e.g. the sum of squares
J = sum(err.^2);
end
This cost function will contain your system of equations and the goal solution; this can be any kind of system. The vector p contains the parameters being estimated, which will be optimized starting from some initial guess. To do the optimization, you have to create a script:
% initial guess, can be zeros, or some other starting point
p0 = zeros(12,1);
% do the parameter optimization
p = fminsearch(@cost_fcn, p0);
In this case p0 is the initial guess, which you provide to fminsearch. The values of this initial guess are then adjusted until a minimum of the cost function is found. When the parameter optimization is finished, p will contain the parameters that result in the lowest error for your system of equations. It is, however, possible that this is a local minimum if there is no exact solution to the problem.
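As a small assumed follow-up, fminsearch can also return the final cost, which lets you check how well the equations ended up being satisfied:
% second output is the cost at the returned parameters; near zero is good
[p, J_final] = fminsearch(@cost_fcn, p0);
fprintf('final sum-of-squares error: %g\n', J_final);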
Your system is over-constrained, meaning you have more equations than unknowns, so you can't solve it exactly. What you can do is find a least-squares solution using mldivide. First re-arrange your equations so that all the constant terms are on the right side of the equals sign, then use mldivide:
>> A = [0.0297 -1.7796; 2.2749 0.0297; 0.0297 2.2749]
A =
0.029700 -1.779600
2.274900 0.029700
0.029700 2.274900
>> b = [1-2.2749; -0.0297; 1.7796]
b =
-1.274900
-0.029700
1.779600
>> A\b
ans =
-0.022191
0.757299
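A quick assumed follow-up to that session: since the system is over-determined, the least-squares residual is generally nonzero, and you can check its size directly:
>> x = A\b;
>> norm(A*x - b)  % displays the size of the least-squares residual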
I have the following ODE:
x_dot = 3*x.^0.5-2*x.^1.5 % (Equation 1)
I am using ode45 to solve it. My solution is given as a vector of dimension k x 1 (usually k = 41, which is determined by the tspan).
On the other hand, I have made a model that approximates the model from (1), but in order to compare how accurate this second model is, I want to solve it (the second ODE) with ode45 as well. My problem is that this second ODE is given in discrete form:
x_dot = f(x) % (Equation 2)
f is discrete and not a continuous function like in (1). The values I have for f are:
0.5644 0.6473 0.7258 0.7999 0.8697 0.9353 0.9967 1.0540 1.1072 1.1564
1.2016 1.2429 1.2803 1.3138 1.3435 1.3695 1.3917 1.4102 1.4250 1.4362
1.4438 1.4477 1.4482 1.4450 1.4384 1.4283 1.4147 1.3977 1.3773 1.3535
1.3263 1.2957 1.2618 1.2246 1.1841 1.1403 1.0932 1.0429 0.9893 0.9325
0.8725
What I want now is to solve this second ODE using ode45. Hopefully I will get a solution very similar to the one from (1). How can I solve a discrete ODE with ode45? Is it even possible to use ode45? Otherwise I could use Runge-Kutta, but I want to be fair when comparing the two methods, which means I have to solve them the same way.
You can use interp1 to create an interpolated lookup table function:
fx = [0.5644 0.6473 0.7258 0.7999 0.8697 0.9353 0.9967 1.0540 1.1072 1.1564 ...
1.2016 1.2429 1.2803 1.3138 1.3435 1.3695 1.3917 1.4102 1.4250 1.4362 ...
1.4438 1.4477 1.4482 1.4450 1.4384 1.4283 1.4147 1.3977 1.3773 1.3535 ...
1.3263 1.2957 1.2618 1.2246 1.1841 1.1403 1.0932 1.0429 0.9893 0.9325 0.8725];
x = 0:0.25:10;
f = @(xq) interp1(x, fx, xq);
Then you should be able to use ode45 as normal:
tspan = [0 1];
x0 = 2;
xout = ode45(@(t,x) f(x), tspan, x0);
Note that you did not specify the values of x over which your function (fx here) is evaluated, so I chose zero to ten. You also won't want to use the values copied and pasted from the command window, of course, because they only have four decimal places of accuracy. Also, because ode45 requires the inputs t and then x, I created a separate anonymous function that wraps f, but f could instead be created with an unused t input if desired.
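If it helps with the comparison you describe, here is a hedged sketch (reusing tspan, x0, and xout from above, and assuming the same zero-to-ten grid for x) that also solves Equation (1) and samples both solutions on a common time grid with deval:
% Solve Equation (1) and compare it with the lookup-table model on one grid
sol1 = ode45(@(t,x) 3*x.^0.5 - 2*x.^1.5, tspan, x0);
tt = linspace(tspan(1), tspan(2), 41);
x1 = deval(sol1, tt);   % samples of the continuous model
x2 = deval(xout, tt);   % samples of the lookup-table model
plot(tt, x1, tt, x2, '--');
legend('Equation (1)', 'Equation (2), interpolated');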
We have an equation similar to the Fredholm integral equation of second kind.
To solve this equation we have been given an iterative solution that is guaranteed to converge for our specific equation. Now our only problem consists in implementing this iterative procedure in MATLAB.
For now, the problematic part of our code looks like this:
function delta = delta(x,a,P,H,E,c,c0,w)
delt = @(x) delta_a(x,a,P,H,E,c0,w);
for i=1:500
delt = @(x) delt(x) - 1/E.*integral(@(xi)((c(1)-c(2)*delt(xi))*ms(xi,x,a,P,H,w)),0,a-0.001);
end
delta=delt;
end
delta_a is a function of x and represents the initial value of the iteration. ms is a function of x and xi.
As you can see, we want delt to depend on both x (before the integral) and xi (inside the integral) in the iteration. Unfortunately, written this way (with the function handle), the code does not give us a numerical value, as we wish. Nor can we write delt as two different functions, one of x and one of xi, since xi is not defined until integral defines it. So how can we make delt depend on xi inside the integral and still get a numerical value out of the iteration?
Do any of you have any suggestions for how we might solve this?
Using numerical integration
Explanation of the input parameters: x is a vector of numerical values; all the rest are constants. A problem with my code is that the input parameter x is not being used (I guess this means that x is being treated as a symbol).
It looks like you can nest anonymous functions in MATLAB:
>> f = @(x) 2*x
f =
@(x)2*x
>> ff = @(x) f(f(x))
ff =
@(x)f(f(x))
>> ff(2)
ans =
8
>> f = ff;
>> f(2)
ans =
8
It is also possible to rebind the handles to the functions.
Thus, you can set up your iteration like
delta_old = @(x) delta_a(x);
for i=1:500
    delta_new = @(x) delta_old(x) - integral(@(xi) delta_old(xi), 0, a); % insert your kernel and limits here
    delta_old = delta_new;
end
plus the inclusion of your parameters...
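For what it's worth, here is a runnable toy version of that loop, with a made-up initial guess and integrand in place of your delta_a and kernel (note that each rebinding wraps the previous handle, so the inner integrals are recomputed on every call and deep nesting becomes very slow):
% Toy rebinding loop; replace x.^2 and the integrand with your own pieces
delta_old = @(x) x.^2;  % stand-in for delta_a
for i = 1:5             % a few iterations only; 500 nested handles would crawl
    delta_new = @(x) delta_old(x) - 0.5*integral(@(xi) delta_old(xi), 0, 1);
    delta_old = delta_new;
end
delta_old(0.5)          % returns an ordinary numerical value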
You may want to consider solving a discretized version of your problem.
Let K be the matrix which discretizes your Fredholm kernel k(t,s), e.g.
K(i,j) = int_a^b k(x_i, s) l_j(s) ds
where l_j(s) is, for instance, the j-th Lagrange interpolant associated with the interpolation nodes (x_i) = x_1, x_2, ..., x_n.
Then, solving your Picard iterations is as simple as doing
phi_{n+1} = f + K*phi_n
i.e.
for i = 1:N
    phi = f + K*phi;
end
where phi_n and f are the nodal values of phi and f on the (x_i).
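A minimal MATLAB sketch of this discretized approach, with an assumed kernel and forcing term, and simple trapezoid quadrature weights standing in for the Lagrange interpolants:
% Discretized Picard iteration phi <- f + K*phi on [a, b] = [0, 1]
n = 50;
x = linspace(0, 1, n).';        % interpolation nodes x_1, ..., x_n
w = (x(2)-x(1))*ones(n, 1);     % trapezoid quadrature weights
w([1 end]) = w([1 end])/2;
k = @(t, s) exp(-abs(t - s));   % assumed example kernel; replace with yours
K = k(x, x.').*w.';             % K(i,j) approximates int k(x_i, s) l_j(s) ds
f = sin(pi*x);                  % assumed forcing term; replace with yours
phi = f;                        % initial guess
for i = 1:100
    phi = f + K*phi;            % the Picard iteration from above
end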
How do I solve a system of ordinary differential equations (an initial value problem) with parameters that depend on time, i.e. on the independent variable?
Say the equations I have are:
Dy(1)/dt=a(t)*y(1)+b(t)*y(2);
Dy(2)/dt=-a(t)*y(3)+b(t)*y(1);
Dy(3)/dt=a(t)*y(2);
where a(t) is a vector and b(t) = c*a(t); the values of a and b change with time, not in a monotone way, at each time step.
I tried to solve this following this post, but when I applied the same principle I got the error message:
"Error using griddedInterpolant The point coordinates are not
sequenced in strict monotonic order."
Can someone please help me out?
Please read until the end to see whether the first part or second part of the answer is relevant to you:
Part 1:
First create an .m file with a function that describe your calculation and functions that will give a and b. For example: create a file called fun_name.m that will contain the following code:
function Dy = fun_name(t,y)
Dy=[ a(t)*y(1)+b(t)*y(2); ...
-a(t)*y(3)+b(t)*y(1); ...
a(t)*y(2)] ;
end
function fa = a(t)
fa = cos(t); % or place whatever you want for a(t)
end
function fb = b(t)
fb = sin(t); % or place whatever you want for b(t)
end
Then use a second file with the following code:
t_values=linspace(0,10,101); % the time vector you want to use, or use tspan type vector, [0 10]
initial_cond=[1 ; 0 ; 0];
[tv,Yv] = ode45(@fun_name, t_values, initial_cond);
plot(tv,Yv(:,1),'+',tv,Yv(:,2),'x',tv,Yv(:,3),'o');
legend('y1','y2','y3');
Of course, for the fun_name.m case I wrote, you need not use subfunctions for a(t) and b(t); you can just use the explicit functional form directly in Dy if that is possible (like cos(t), etc.).
Part 2: If a(t) and b(t) are just vectors of numbers you happen to have that cannot be expressed as functions of t (as in Part 1), then you will also need a time vector telling when each of them applies. This can of course be the same time vector you use for the ODE, but it need not be, as long as interpolation works. I'll treat the general case, where they have different time spans or resolutions. Create the fun_name.m file:
function Dy = fun_name(t, y, at, a, bt, b)
a = interp1(at, a, t); % interpolate the data set (at, a) at time t
b = interp1(bt, b, t); % interpolate the data set (bt, b) at time t
Dy=[ a*y(1)+b*y(2); ...
    -a*y(3)+b*y(1); ...
     a*y(2)] ;
end
In order to use it, see the following script:
% generate bogus `a` and `b` function vectors with different time vectors `at` and `bt`
at = linspace(-1, 11, 74);  % generate t for a, in a generic case where the time spans and sampling can differ
bt = linspace(-3, 33, 122); % generate t for b
a = rand(numel(at), 1);
b = rand(numel(bt), 1);
% or use those you have, but you also need to pass their time info...
t_values=linspace(0,10,101); % the time vector you want to use
initial_cond=[1 ; 0 ; 0];
[tv,Yv] = ode45(@(t,y) fun_name(t, y, at, a, bt, b), t_values, initial_cond);
plot(tv,Yv(:,1),'+',tv,Yv(:,2),'x',tv,Yv(:,3),'o');
legend('y1','y2','y3');
Perhaps this is more of a math question than a MATLAB one; I'm not really sure. I'm using MATLAB to compute an economic model, the New Hybrid ISLM model, and there's a confusing step where the author switches the sign of the solution.
First, the author declares symbolic variables and sets up a system of difference equations. Note that the suffixes "a" and "2t" both mean "time t+1", "2a" means "time t+2" and "t" means "time t":
%% --------------------------[2] MODEL proc-----------------------------%%
% Define endogenous vars ('a' denotes t+1 values)
syms y2a pi2a ya pia va y2t pi2t yt pit vt ;
% Monetary policy rule
ia = q1*ya+q2*pia;
% ia = q1*(ya-yt)+q2*pia; %%option speed limit policy
% Model equations
IS = rho*y2a+(1-rho)*yt-sigma*(ia-pi2a)-ya;
AS = beta*pi2a+(1-beta)*pit+alpha*ya-pia+va;
dum1 = ya-y2t;
dum2 = pia-pi2t;
MPs = phi*vt-va;
optcon = [IS ; AS ; dum1 ; dum2; MPs];
Edit: The equations going into the matrix, as they would appear in a textbook, are as follows (curly braces indicate time subscripts; Greek letters are parameters):
First equation:
y{t+1} = rho*y{t+2} + (1-rho)*y{t} - sigma*(i{t+1}-pi{t+2})
Second equation:
pi{t+1} = beta*pi{t+2} + (1-beta)*pi{t} + alpha*y{t+1} + v{t+1}
Third and fourth are dummies:
y{t+1} = y{t+1}
pi{t+1} = pi{t+1}
Fifth is simple:
v{t+1} = phi*v{t}
Moving on, the author computes the matrix A:
%% ------------------ [3] Linearization proc ------------------------%%
% Differentiation
xx = [y2a pi2a ya pia va y2t pi2t yt pit vt] ; % define vars
jopt = jacobian(optcon,xx);
% Define Linear Coefficients
coef = eval(jopt);
B = [ -coef(:,1:5) ] ;
C = [ coef(:,6:10) ] ;
% B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)]
A = inv(C)*B ; %(Linearized reduced form )
As far as I understand, this A is the solution to the system: it's the matrix that turns time t+1 and t+2 variables into time t and t+1 variables (it's a forward-looking model). My question is essentially why it is necessary to reverse the signs of all the partial derivatives in B in order to get this solution. I'm talking about this step:
B = [ -coef(:,1:5) ] ;
Reversing the sign here obviously reverses the sign of every component of A, but I don't have a clear understanding of why it's necessary. My apologies if the question is unclear or if this isn't the best place to ask.
I think the key is that each entry of optcon is an expression set equal to zero, so the linearized system starts out with all of its terms on one side of the equals sign; getting the B[terms at t+1] = C[terms at t] form requires moving one block across, which is what flips the sign.
You've got an output vector of states called optcon = [IS; AS; dum1; dum2; MPs] and two vectors of input states [y2 pi2 y pi v]: the input vector at time t+1 is [y2a pi2a ya pia va] and the input vector at time t is [y2t pi2t yt pit vt]. These two are concatenated into a single vector for the call to jacobian(), then separated afterwards (the same thing could have been done in two calls). The first 5 columns of the output of jacobian() are the partial derivatives of optcon with respect to the input vector at time t+1, and the second 5 columns are with respect to the input vector at time t.
Since the model is linear in these variables and each equation equals zero, the linearization reads coef(:,1:5)*X_{t+1} + coef(:,6:10)*X_t = 0. Moving the first block to the other side gives -coef(:,1:5)*X_{t+1} = coef(:,6:10)*X_t, i.e. B*X_{t+1} = C*X_t with B = -coef(:,1:5) and C = coef(:,6:10), exactly as in the comment in the code. The sign reversal is just that move across the equals sign.
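A toy symbolic check of this algebra (a hypothetical one-equation, one-variable version of the same construction):
% One equation eq = 0 with eq = p*x_next - x_now, so x_now = p*x_next
syms x_next x_now p
eq = p*x_next - x_now;                 % same convention as optcon: eq == 0
coef = jacobian(eq, [x_next, x_now]);  % gives [p, -1]
B = -coef(:, 1);                       % = -p  (sign flipped, as in the code)
C = coef(:, 2);                        % = -1
A = inv(C)*B;                          % = p, so x_now = A*x_next as expected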