I am trying to calculate the integral of a function that MATLAB and Mathematica cannot do symbolically.
Here is my MATLAB code so far, though I understand it may not be very helpful as is.
syms t
f = @(t) asin(0.5*sin(t));
a = @(t) sin(t);
F = int(f(t),t) % MATLAB can't do this
F =
int(asin(sin(t)/2), t)
A = int(a(t),t) % This works
A =
-cos(t)
N = 1000;     % number of subintervals (example value)
dt = 1/(N-1); % some small number
for i=1:N
F(i) = integral(f,(i-1)*dt,i*dt);
A(i) = integral(a,(i-1)*dt,i*dt);
end
Both of the calculations in the for loop give, up to a factor of dt, a rough approximation of f or a rather than their integrals.
On Math Stack Exchange I found a question that derives a finite-difference-like method for the integral at a point. However, when I did the calculation in MATLAB it output a scaled-down version of f, which was evident after plotting (see above for what I mean by scaled down). I think that is because, over small intervals, the integral essentially approximates the function itself to varying degrees of accuracy (again, see above).
I am trying to get either a symbolic equation for the integral, or an approximation of the integral of the function at each location.
So my question is: if I have a function f that MATLAB and Mathematica cannot easily integrate,
can I approximate the integral directly with an integral calculator besides the default ones (int, integral, trapz)?
or
can I approximate the function with finite differences first and then evaluate the integral symbolically?
Your code is nearly fine; it just needs to be
for i=1:N
F(i) = integral(f,0,i*dt);
end
You could also do
F(1)=integral(f,0,dt)
for i=2:N
F(i) = F(i-1)+integral(f,(i-1)*dt,i*dt);
end
The second option is clearly more efficient, since it reuses the previously accumulated value.
This works because the primitive is really F(x) = int(f(x), 0, x) (the lower bound 0 fixes the integration constant), and for sufficiently small dx you have shown that f(x) ≈ int(f(x), x, x+dx)/dx. In other words, you have proven that MATLAB's integral function does its job.
For example, the loop above will compute F(x) = int(f(x), 0, x); if you wish to compute int(f(x), a, x) instead, just replace the 0 above by the constant a you like.
With your f(t) = asin(sin(t)/2) this means F should contain a discretization of its antiderivative over the sampled interval.
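As a quick sanity check (a minimal sketch built on the loop above; cumtrapz and the known antiderivative of sin are used purely as independent references):
N = 1000;
dt = 1/(N-1);
tt = (1:N)*dt;                 % right endpoints of the subintervals
f = @(t) asin(0.5*sin(t));
a = @(t) sin(t);
F = zeros(1,N); A = zeros(1,N);
F(1) = integral(f,0,dt);
A(1) = integral(a,0,dt);
for i=2:N
F(i) = F(i-1)+integral(f,(i-1)*dt,i*dt);
A(i) = A(i-1)+integral(a,(i-1)*dt,i*dt);
end
plot(tt,A,tt,1-cos(tt),'--',tt,F,tt,cumtrapz(tt,f(tt)),'--')
legend('loop: \int sin','1-cos(t)','loop: \int asin(sin(t)/2)','cumtrapz reference')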
The accepted answer is in general by far the best method, I would say, but if certain restrictions on your functions are acceptable then there is a second method.
For two example functions f and g, see below:
T = 1; % Period
NT = 1; % Number of periods
dt = 0.01; % time interval
time = 0:dt:NT*T; % time
syms t
x = K*sin(2*pi*t+B); % edit as appropriate; K, B (and A1, K1, A2, K2 below) are user-chosen constants
% f = A/tanh(K)*tanh(K*sin(2*pi*t+p)) is the target for f
% g = A/asin(K)*asin(K*sin(2*pi*t+p)) is the target for g
The coefficients below come from the standard series expansions for tanh and asin.
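For reference, the two expansions being truncated here (they match the coefficients used in the code below) are
tanh(y) = \sum_{k=1}^{\infty} 2^{2k} (2^{2k}-1) B_{2k} / (2k)! * y^{2k-1},   |y| < \pi/2
asin(y) = \sum_{k=0}^{\infty} (2k)! / (2^{2k} (k!)^2 (2k+1)) * y^{2k+1},     |y| <= 1
where B_{2k} are the Bernoulli numbers (bernoulli(2*k) in MATLAB).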
f = A1/tanh(K1)*(2^(2*1)-1)*2^(2*1)*bernoulli(2*1)/factorial(2*1)*x^(2*1-1); % k = 1 term of the tanh series
% |K1|<pi/2
g = A2/asin(K2)*factorial(2*0)/(2^(2*0)*factorial(0)^2*(2*0+1))*x^(2*0+1); % k = 0 term of the asin series
% |K2|<1
Note that there are no such limitations in the accepted answer.
N = 60;
for k=2:N
a1 = (2^(2*k)-1)*2^(2*k)*bernoulli(2*k)/factorial(2*k);
f = f + A1/tanh(K1)*a1*x^(2*k-1);
a2 = factorial(2*k)/(2^(2*k)*factorial(k)^2*(2*k+1));
g = g + A2/asin(K2)*a2*x^(2*k+1);
end
MATLAB can symbolically integrate sin(t)^n for integer n, so the truncated series can be integrated term by term.
F = int(f,t);
phi = double(subs(F,t,time));
G = int(g,t);
psi = double(subs(G,t,time));
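As a rough sanity check (a sketch only, assuming for the sake of example that the constants above were set to something like A2 = 1, B = 0 and K = K2 = 0.5), the series-based antiderivative psi should track a direct numeric running integral of the target function:
g_exact = @(tt) A2/asin(K2)*asin(K2*sin(2*pi*tt+B)); % closed form that g approximates
psi_ref = cumtrapz(time, g_exact(time));             % numeric running integral as reference
plot(time, psi - psi(1), time, psi_ref, '--')        % the two curves should overlap
legend('series + int','cumtrapz reference')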
Related
In the integral Dt = int(1/theta_dot, theta, 0, psi + 2*pi*N) (evaluated in the code below),
I want to optimize the function Dt, as I know the end result of the integral. I have expressions for k1 and k0 in terms of k2 and N, and it is k2 and N that I would like to optimize. They have constraints, needing to be between certain values. I have it all set up in my code, but I am just unaware of how to tell the genetic algorithm to optimize an integral function. Is there something I'm missing here? The integral is usually evaluated numerically, but I am trying to go backwards: assuming I know the answer, find the input parameters.
EDIT:
All right, so here's my code. I know the integral MUST add up to a known value, and I know that value, so I need to optimize the variables given that constraint. I have created an objective function y = integral - DT. I kept theta as a sym because it is the variable being integrated over to give DT.
function y = objective(k)
% Define constants
AU = astroConstants(2);
mu = astroConstants(4);
% Define start and finish parameters for the exponential sinusoid.
r1 = AU; % Initial radius
psi = pi/2; % Final polar angle of Mars/finish transfer
phi = pi/2;
r2 = 1.5*AU;
global k1
k1 = sqrt( ( (log(r1/r2) + sin(k(1)*(psi + 2*pi*k(2)))*tan(0)/k(1)) / ...
    (1-cos(k(1)*(psi+2*pi*k(2)))) )^2 + tan(0)^2/k(1)^2 );
k0 = r1/exp(k1*sin(phi));
syms theta
R = k0*exp(k1*sin(k(1)*theta + phi));
theta_dot = sqrt((mu/(R^3))*1/((tan(0))^2 + k1*(k(1))^2*sin(k(1)*theta + phi) + 1));
z = 1/theta_dot;
y = int(z, theta, 0,(psi+2*pi*k(2))) - 1.3069e08;
global x
x=y;
end
My k's are constrained, and the following is the constraint function. I'm hoping what I have done here tells the solver that the function MUST equal 0.
function [c,c_eq] = myconstraints(k)
global k1 x
c = [norm(k1*(k(1)^2))-1 -norm(k1*(k(1)^2))];
c_eq =[x];
end
And finally, my ga code looks like this. Honestly, I've been playing with it all night and getting error message after error message, ranging from "constraint function must return real value" to "error in fcnvectorizer" and "unable to convert expression into double array" (the last two coming after I removed the constraints).
clc; clear;
ObjFcn = @objective;
nvars = 2
LB = [0 2];
UB = [1 7];
ConsFcn = @myconstraints;
[k,fval] = ga(ObjFcn,nvars,[],[],[],[],LB,UB,ConsFcn);
I've been stuck on this problem for weeks and have gotten nowhere, even with searching through literature.
How can I solve the equation f(x) = ln(x^2) - 0.7 = 0 with a built-in MATLAB command?
clc;clear all;close all;
f(x)=ln(x^2)-0.7=0
B=sqrt f(x)
You can use symbolic variables together with the solve function:
syms x;
eqn = log(x^2) - 0.7 == 0;
solve(eqn,x)
The above code will output:
ans =
exp(7/20)
-exp(7/20)
Since the unknown appears squared, the solver returns two distinct solutions (people often forget that such equations have two mirror-image solutions, one positive and one negative).
If you want to retrieve the numerical values (for example, in order to calculate their sqrt value):
sol = solve(eqn,x);
num = double(sol)
num =
1.4191
-1.4191
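Continuing from num above, the sqrt asked about in the question is then one more line (shown here as a small sketch; note that the negative root gives a purely imaginary result):
B = sqrt(num)   % approximately [1.1913 + 0.0000i; 0.0000 + 1.1913i]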
Put the following code into a MATLAB script, name it "main.m".
function b=main
clc
x=solveF()
y=f(x)
b=sqrt(y)
end
function y=f(x)
y=log(x^2)-0.7
end
function x=solveF()
g = @(x) abs(f(x)-0)
x = fminsearch(g, 1.0)
end
Then run it as:
main
You will get the results:
x =
1.4190
y =
-3.4643e-05
b =
0.0000 + 0.0059i
ans =
0.0000 + 0.0059i
(b comes out complex because fminsearch stops slightly short of the exact root, so y = f(x) is a small negative number.)
You can define the functions in MATLAB as follows:
f = @(x) log(x^2)-0.7;
B = @(x) sqrt(f(x));
If you want to find the value of x satisfying a constraint, you can design a function that is equal to zero when the constraint is respected, then call fminsearch to find x:
f_constraint = @(x) abs(f(x)-0);
x_opt = fminsearch(f_constraint, 1.3); % function handle, initial estimate
In your example, B(x_opt) should be equal to zero. This is not exactly the case, as fminsearch only estimates the solution.
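As a side note (my own sketch, not part of the answer above): since this is a scalar root-finding problem, fzero can also be used directly, without the abs() reformulation:
f = @(x) log(x^2) - 0.7;
x_root = fzero(f, 1.3)   % ~1.4191, same root as fminsearch finds
B_root = sqrt(f(x_root)) % ~0 up to round-off, since f(x_root) is essentially zero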
I have a question about using MATLAB to compute solutions of stochastic differential equations. The equations are 2.2a,b on page 3 of this paper (PDF).
My professor suggested using ode45 with a small time step, but the results do not match those in the article, in particular the time series and the PDF. I also have a doubt about the definition of the white noise in the function.
Here is the code for the integration function:
function dVdt = R_Lang( t,V )
global sigma lambda alpha
W1=sigma*randn(1,1);
W2=sigma*randn(1,1);
dVdt=[alpha*V(1)+lambda*V(1)^3+1/V(1)*0.5*sigma^2+W1;
sigma/V(1)*W2];
end
Main script:
clear variables
close all
global sigma lambda alpha
sigma=sqrt(2*0.0028);
alpha=3.81;
lambda=-5604;
tspan=[0,10];
options = odeset('RelTol',1E-6,'AbsTol',1E-6,'MaxStep',0.05);
A0=random('norm',0,0.5,[2,1]);
[t,L]=ode45(@(t,L) R_Lang(t,L),tspan,A0,options);
If you have any suggestions I'd be grateful.
Here is the new code to compare my EM method with 'sde_euler'.
lambda = -5604;
sigma=sqrt(2*0.0028) ;
Rzero = 0.03; % problem parameters
phizero=-1;
dt=1e-5;
T = 0:dt:10;
N=length(T);
Xi1 = sigma*randn(1,N); % Gaussian Noise with variance=sigma^2
Xi2 = sigma*randn(1,N);
alpha=3.81;
Rem = zeros(1,N); % preallocate for efficiency
Rtemp = Rzero;
phiem = zeros(1,N); % preallocate for efficiency
phitemp = phizero;
for j = 1:N
Rtemp = Rtemp + dt*(alpha*Rtemp+lambda*Rtemp^3+sigma^2/(2*Rtemp)) + sigma*Xi1(j);
phitemp=phitemp+sigma/Rtemp*Xi2(j);
phiem(j)=phitemp;
Rem(j) = Rtemp;
end
f = @(t,V)[alpha*V(1)+lambda*V(1)^3+0.5*sigma^2/V(1)/2;
0]; % Drift function
g = @(t,V)[sigma;
sigma/V(1)]; % Diffusion function
A0 = [0.03;0]; % 2-by-1 initial condition
opts = sdeset('RandSeed',1,'SDEType','Ito'); % Set random seed, use Ito formulation
L = sde_euler(f,g,T,A0,opts);
plot(T,Rem,'r')
hold on
plot(T,L(:,1),'b')
Thanks again for the help !
ODEs and SDEs are very different and one should not use tools for ODEs, like ode45, to try to solve SDEs. Looking at the paper you linked to, they used a basic Euler-Maruyama scheme to integrate the system. This is a very simple solver to implement yourself.
Before proceeding, you (and your professor!) should take some time to read up on SDEs and how to solve them numerically. I recommend this paper, which includes many MATLAB examples:
Desmond J. Higham, 2001, An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, SIAM Review (Education Section), 43, 525–546. http://dx.doi.org/10.1137/S0036144500378302
The URL to the MATLAB files in the paper won't work; use this one. Note that, as this is a 15-year-old paper, some of the code related to random number generation is out of date (use rng(1) instead of randn('state',1) to seed the generator).
If you are familiar with ode45 you might look at my SDETools MATLAB toolbox on GitHub. It was designed to be fast and has an interface that works very similarly to MATLAB's ODE suite. Here is how you might code up your example using the Euler-Maruyama solver:
sigma = 1e-1*sqrt(2*0.0028);
lambda = -5604;
alpha = 3.81;
f = @(t,V)[alpha*V(1)+lambda*V(1)^3+0.5*sigma^2/V(1);
0]; % Drift function
g = @(t,V)[sigma;
sigma/V(1)]; % Diffusion function
dt = 1e-3; % Time step
t = 0:dt:10; % Time vector
A0 = [0.03;-2]; % 2-by-1 initial condition
opts = sdeset('RandSeed',1,'SDEType','Ito'); % Set random seed, use Ito formulation
L = sde_euler(f,g,t,A0,opts); % Integrate
figure;
subplot(211);
plot(t,L(:,2));
ylabel('\phi');
subplot(212);
plot(t,L(:,1));
ylabel('r');
xlabel('t');
I had to reduce the size of sigma, or the noise was so large that it could cause the radius variable to go negative. I'm not sure if the paper discusses how they handle this singularity. You can try the 'NonNegative' option within sdeset to handle this, or you may need to construct your own solver. I also couldn't find what integration time step the paper used. You should also consider contacting the authors of the paper directly.
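For example, a possible call might look like this (a sketch only; the exact value format of 'NonNegative' is my assumption here, so check the SDETools documentation):
% Assumed usage: pass the index of the state to keep non-negative (r = V(1))
opts = sdeset('RandSeed',1,'SDEType','Ito','NonNegative',1);
L = sde_euler(f,g,t,A0,opts);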
UPDATE
Here's an Euler-Maruyama implementation that matches the sde_euler code above:
sigma = 1e-1*sqrt(2*0.0028);
lambda = -5604;
alpha = 3.81;
f = @(t,V)[alpha*V(1)+lambda*V(1)^3+0.5*sigma^2/V(1);
0]; % Drift function
g = @(t,V)[sigma;
sigma/V(1)]; % Diffusion function
dt = 1e-3; % Time step
t = 0:dt:10; % Time vector
A0 = [0.03;-2]; % 2-by-1 initial condition
% Create and initialize state vector (L here is transposed relative to sde_euler output)
lt = length(t);
n = length(A0);
L = zeros(n,lt);
L(:,1) = A0;
% Set seed and pre-calculate Wiener increments with order matching sde_euler
rng(1);
r = sqrt(dt)*randn(lt-1,n).';
% General Euler-Maruyama integration loop
for i = 1:lt-1
L(:,i+1) = L(:,i)+f(t(i),L(:,i))*dt+r(:,i).*g(t(i),L(:,i));
end
figure;
subplot(211);
plot(t,L(2,:));
ylabel('\phi');
subplot(212);
plot(t,L(1,:));
ylabel('r');
xlabel('t');
I want to integrate
f(x) = exp(-x^2/2)
from x=-infinity to x=+infinity
by using the Monte Carlo method. I use the function randn() to generate all x_i for the function f(x_i) = exp(-x_i^2/2) that I want to integrate, and afterwards calculate the mean value of f([x_1,...,x_n]). My problem is that the result depends on what values I choose for my borders x1 and x2 (see below). My result moves further away from the true value as I increase x1 and x2, although it should actually get better and better as x1 and x2 increase.
Does anyone see my mistake?
Here is my MATLAB code:
clear all;
b=10; % border
x1 = -b; % left border
x2 = b; % right border
n = 10^6; % number of random numbers
x = randn(n,1);
f = ones(n,1);
g = exp(-(x.^2)/2);
F = ((x2-x1)/n)*f'*g;
The right value should be ~2.5066.
Thanks
Try this:
clear all;
b=10; % border
x1 = -b; % left border
x2 = b; % right border
n = 10^6; % number of random numbers
x = sort(abs(x1 - x2) * rand(n,1) + x1);
f = exp(-x.^2/2);
F = trapz(x,f)
F =
2.5066
OK, let's start by writing down the general case of MC integration:
I = S f(x) * p(x) dx, x in [a...b]
Here S denotes the integral sign.
Usually, p(x) is a normalized probability density function, f(x) is the function you want to integrate, and the algorithm is a very simple one (a minimal MATLAB sketch follows the list):
set accumulator s to zero
start loop of N events
sample x randomly from p(x)
given x, compute f(x) and add to accumulator
back to start loop if not done
if done, divide accumulator by N and return it
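Sketched in MATLAB (sample_p and f are hypothetical, user-supplied function handles; sample_p() is assumed to draw one sample from p(x)):
N = 1e6;
s = 0;                % accumulator
for i = 1:N
    xi = sample_p();  % sample x randomly from p(x)
    s = s + f(xi);    % given x, compute f(x) and add to accumulator
end
I = s/N               % divide accumulator by N: estimate of S f(x)*p(x) dx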
In the simplest textbook case you have
I = S f(x) dx, x in [a...b]
which means the PDF is the uniform one,
p(x) = 1/(b-a)
but what you have to sum is actually (b-a)*f(x), because your integral now looks like
I = S (b-a)*f(x) 1/(b-a) dx, x in [a...b]
In general, if both f(x) and p(x) could serve as a PDF, then it is a matter of choice whether you integrate f(x) over p(x), or p(x) over f(x). No difference! (Well, except maybe computation time.)
So, back to the particular integral (which is equal to \sqrt{2\pi}, I believe):
I = S exp(-x^2/2) dx, x in [-infinity...infinity]
You could use a more traditional approach like @Agriculturist and write it as
I = S exp(-x^2/2)*(2a) 1/(2a) dx, x in [-a...a]
and sample x uniformly in the [-a...a] interval; for each x compute exp(-x^2/2), average the values, and multiply by 2a to get the result.
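In MATLAB, that traditional version might look like this minimal sketch (a = 10 is an assumed truncation of the infinite domain; the tails beyond it are negligible):
a = 10;                      % truncation of the infinite domain
n = 1e6;                     % number of samples
x = -a + 2*a*rand(n,1);      % x uniform in [-a, a]
F = 2*a*mean(exp(-x.^2/2))   % should be close to sqrt(2*pi) ~ 2.5066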
From what I understand, you want to use exp() as PDF, so your integral looks like
I = S D * exp(-x^2/2)/D dx, x in [-infinity...infinity]
The PDF has to be normalized, so it includes the normalization factor D, which is exactly equal to \sqrt{2\pi} from the Gaussian integral.
Now f(x) is just a constant equal to D; it doesn't depend on x. That means that for each sampled x you add a CONSTANT value D to the accumulator. After running N samples,
the accumulator holds exactly N*D. To find the mean you divide by N, and as a result you get a perfect D, which is \sqrt{2\pi}, which, in turn, is
2.5066.
Too rusty to write any MATLAB, and Happy New Year anyway.
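For completeness, here is a minimal importance-sampling sketch of the idea above; as a variation (my own choice, not from the answer) it samples from a wider Gaussian N(0, s^2) instead of the target itself, so the weight f(x)/p(x) is not just a constant:
n = 1e6;
s = 2;                                                    % proposal standard deviation
x = s*randn(n,1);                                         % x ~ N(0, s^2)
w = exp(-x.^2/2) ./ (exp(-x.^2/(2*s^2))/(s*sqrt(2*pi)));  % f(x)/p(x)
F = mean(w)                                               % should be close to 2.5066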
My problem has 60 variables (x1 to x60) and here is the function:
f=(x1+x2+x3)*x1+(x2+x3+x4)*x2+...+(x58+x59+x60)*x58
I want to get the Hessian matrix of the function f. However, because there are so many variables, I don't want to write them one by one for syms and f.
I know I can manually calculate the Hessian matrix of the function f, as the function is not too difficult. However, I occasionally need to change the form of the function, for example by increasing the number of variables in the brackets:
f=(x1+x2+x3+x4)*x1+(x2+x3+x4+x5)*x2+...+(x57+x58+x59+x60)*x57
Therefore, I don't want to manually compute the Hessian matrix of f every time the function form changes. Is there an easier way to use syms to define f with these 60 variables in MATLAB so that I can use hessian to get the Hessian matrix of f?
First, given the regular and simple nature of the function f described, your Hessian has a defined structure that can be directly calculated numerically. Like this, for example:
n = 60; % number of variables
b = 3; % number of terms in parentheses
h = diag(2+zeros(n,1));
for i = 1:b-1
d = diag(ones(n-i,1),i);
h = h+d+d.';
end
h(n-b+2:n,n-b+2:n) = 0
This can be done without a for loop via something like this:
n = 60; % number of variables
b = 3; % number of terms in parentheses
h = full(spdiags(repmat(ones(n,1),1,2*b-1),1-b:b-1,n,n)+speye(n));
h(n-b+2:n,n-b+2:n) = 0
Symbolically, you can create a vector of variables with sym, build your function, and calculate the Hessian like this:
n = 60; % number of variables
b = 3; % number of terms in parentheses
x = sym('x',[n 1]); % vector of variables
f = 0;
for i = 1:n-b+1
f = f+sum(x(i:i+b-1))*x(i);
end
h = hessian(f,x)
It's possible to remove the for loops, but there won't be much performance benefit.
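If you do want a loop-free symbolic construction anyway, one possible sketch (my own variant, not claimed to be faster) writes f as a quadratic form x.'*U*x with a banded matrix U, so hessian(f,x) returns U + U.':
n = 60; % number of variables
b = 3;  % number of terms in parentheses
x = sym('x',[n 1]);
U = full(spdiags(ones(n,b),0:b-1,n,n)); % U(i,i:i+b-1) = 1
U(n-b+2:end,:) = 0;                     % drop the incomplete trailing terms
f = x.'*U*x;                            % same f as the loop above
h = hessian(f,x)                        % equals U + U.'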