I am working on a problem that involves using the Euler method to approximate the differential equation df/dt = a*f(t) - b*[f(t)]^2, both when b = 0 and when b is not zero; I am then to compare the analytic solution to the approximate solution when b = 0.
f(1) = 1000;
t(1)= 0;
a = 10;
b = 0 ;
dt = 0.01;
Nsteps = 10/dt;
for i = 2:Nsteps
t(i) = dt + t(i-1);
%f(i) = f(i-1)*(1 + dt*(a - b*f(i-1)));
f(i) = f(i-1)*(1 + a*dt);
end
plot(t,f,'r-')
hold on
fa= a*exp(a*t)
plot(t,fa,'bo')
When b=0, the solution to the differential equation is f(t) = c*exp(a*t). When I apply the initial condition f(0) = 1000, the solution becomes f(t) = 1000*exp(a*t). Now, my professor said that when a differential equation has an analytic solution, no matter what time step you use, the graphs of the analytic solution and the approximation (Euler's method) will coincide. So I expected the two graphs to overlap. I attached a picture of what I got.
Why did this occur? In order to get the graphs to overlap, I changed the 1000 to 10 (which is a = 10), just for the heck of it. When I did that, the two overlapped. I don't understand. What am I doing incorrectly?
Why should the numerical solution give the same answer as the analytical one? Looking at pixels overlapping on the screen is not a very precise way to discern anything. You should examine the error between the two (absolute and/or relative). You might also want to examine what happens when you change the step size, and you might want to play with a linear system as well. You don't need to integrate out so far to see these effects – integrating to t = 0.1 or 1 suffices. Here is some better-formatted code to work with:
t0 = 0;
dt = 0.01;
tf = 0.1;
t = t0:dt:tf; % No need to integrate t in for loop for fixed time step
lt = length(t);
f = zeros(1,lt); % Pre-allocate f
f0 = 1000; % Initial condition
f(1) = f0;
a = 10;
for i = 1:lt-1
f(i+1) = f(i) + a*f(i)*dt;
%f(i+1) = f(i) + a*dt; % Alternative linear system to try
end
% Analytic solution
fa = f0*exp(a*t);
%fa = f0+a*t; % Alternative linear system to try
figure;
plot(t,f,'r-',t,fa,'bo')
% Plot absolute error
figure;
plot(t,abs(f-fa))
% Plot relative error
figure;
plot(t,abs(f-fa)./fa)
You're also not preallocating any of your arrays, which makes your code very inefficient. My code does. Read about that here.
Much more beyond this is really off-topic for this site, which is focussed on programming rather than mathematics. If you really have questions about the numerical details that aren't answered by reading your text book (or the Wikipedia page for the Euler method) then you should ask them at Math.StackExchange.
Numerical methods are not exact: there is always an error between the numerical and analytical solutions. Since Euler's method is a first-order method, the global truncation error is proportional to the integration step size.
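To see that proportionality concretely, here is a minimal sketch (reusing the a, f0, and tf values from the code above) in which halving dt roughly halves the final-time error:
a = 10; f0 = 1000; tf = 0.1; % same values as the code above
for dt = [0.01 0.005 0.0025]
    n = round(tf/dt); % number of Euler steps
    fEuler = f0*(1 + a*dt)^n; % forward Euler for f' = a*f is exactly geometric
    err = abs(fEuler - f0*exp(a*tf)); % error vs. the analytic solution
    fprintf('dt = %.4f, error at t = %.1f: %.4g\n', dt, tf, err);
end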
I want to generate a random series x with length N through the following rule related to non-central chi-square distribution:
$x_{n+1} \sim \chi^2_{\nu}(\lambda x_n)$
where $\nu$ is a given constant representing the degrees of freedom, $\lambda$ is also pre-specified, the product $\lambda x_n$ is the non-centrality parameter, and $x_1$ is given.
I wrote the following code to generate such a sequence and time the run with $x_1 = 0.04$, $\nu = 0.005$, $\lambda = 100$ and N = 1e5:
tic;
N = 1e5;
x = zeros(1,N);
x(1) = 0.04;
nu = 0.005;
lambda = 100;
for i = 1:N-1
x(i+1) = ncx2rnd(nu,lambda*x(i));
end
toc;
To illustrate my question, I tested another example, which is different from the one above. Here I consider generating N = 1e5 samples from the distribution $\chi^2_{\nu}(\lambda)$ with $\nu = 0.005$ and $\lambda = 100$:
tic;
N = 1e5;
x = zeros(1,N);
nu = 0.005;
lambda = 100;
for i = 1:N
x(i) = ncx2rnd(nu,lambda);
end
toc;
tic;
N = 1e5;
nu = 0.005;
lambda = 100;
x = ncx2rnd(nu,lambda*ones(1,N));
toc;
These two approaches are equivalent. However, it turns out that the second approach, which does not use a for-loop, is much faster than the first. The difference between the examples is that in the second, generating a sample does not require information about previous samples, so all samples can be generated simultaneously without a for-loop; that is not the case in the first. Based on this, I wonder whether avoiding the for-loop would accelerate the code execution. So, is there any MATLAB built-in function to generate the random series shown in the first example without a for-loop when the rule of dependence on previous samples is explicit? If the rule is linear, I know the function filter would be a possible choice (see the sketch below); what about cases like the first example?
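For reference, here is a minimal sketch of the linear case I mean; the coefficient a and the noise w are hypothetical:
a = 0.9; % hypothetical recursion coefficient
N = 1e5;
w = randn(1,N); % hypothetical driving noise
x = filter(1, [1 -a], w); % computes x(n) = a*x(n-1) + w(n) with no loop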
Logically it's impossible to calculate something iterative without doing the iterations. If x(n+1) is dependent on x(n) then you must calculate x(n) first, there is no "clever trick" here.
That just leaves us to optimise the calculation within the loop, specifically ncx2rnd. As with most MATLAB built-in functions, it is already fairly concise and performant, but there are some things to consider. Note that what I'm about to suggest involves using edit ncx2rnd to look inside this built-in function, which contains code under MathWorks copyright; I'm simply noting observations about how it works.
There are some input checks to handle incorrectly sized inputs and/or inputs with negative values. If you can take the burden of validation on yourself (i.e. you know your inputs are valid) then you can reduce the function to its single mathematical operation:
% function r = ncx2rnd(v,delta)
r = 2.*randg(poissrnd(delta./2, sizeOut)) + 2.*randg(v./2,sizeOut); % sizeOut is the desired output size
Running this standalone saves around 20% of the processing time, which was for input validation (with a nominal N=1e5).
In the MathWorks syntax, delta is equal to your lambda*x(i), the other term including v is independent of your x, so you could compute it outside of the loop, i.e. vectorising one of the calls to randg. Again using N=1e5 this brings the total time saving to around 25%.
The result would mean this change to your example:
% Common inputs
N = 1e5;
nu = 0.1;
lambda = 0.1;
% Baseline example
x = zeros(1,N);
x(1) = 0.04;
for i = 1:N-1
x(i+1) = ncx2rnd(nu,lambda*x(i));
end
% ~25% faster alternative, with no input validation and partially vectorised
x = zeros(1,N);
x(1) = 0.04;
vTerm = 2.*randg(nu./2, [1,N]);
for i = 1:N-1
x(i+1) = 2.*randg(poissrnd(lambda*x(i)./2, [1,1])) + vTerm(i);
end
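If you want to check the saving on your machine, a simple harness (reusing the common inputs above; exact timings will vary by machine and MATLAB version) is:
tic;
x = zeros(1,N);
x(1) = 0.04;
for i = 1:N-1
    x(i+1) = ncx2rnd(nu,lambda*x(i));
end
tBase = toc;
tic;
x = zeros(1,N);
x(1) = 0.04;
vTerm = 2.*randg(nu./2, [1,N]);
for i = 1:N-1
    x(i+1) = 2.*randg(poissrnd(lambda*x(i)./2, [1,1])) + vTerm(i);
end
tOpt = toc;
fprintf('Time saving: %.0f%%\n', 100*(1 - tOpt/tBase));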
I am trying to fit histogram data that seem to follow a Poisson distribution. I declare the function as follows and try to fit it using the least-squares method.
xdata; ydata; % Arrays in which I have stored the data.
% ydata tells us how many times each xdata value is repeated in the set.
fun = @(x,xdata) (exp(-x(1))*(x(1).^xdata))./factorial(xdata); % Function I
% want to use in the fit. It is a Poisson distribution.
x0 = 60; % Approximate value of the parameter lambda to help the fit
p = lsqcurvefit(fun,x0,xdata,ydata); % Fit in the least-squares sense
However, I encounter the following error:
Error using snls (line 48)
Objective function is returning undefined values at initial point.
lsqcurvefit cannot continue.
I have seen online that this sometimes has to do with a division by zero, for example, which can be solved by adding a small amount to the denominator so that the indeterminate form never occurs. However, that is not my case. What is the problem then?
I implemented both methods (Maximum Likelihood and PDF Curve Fitting).
You can see the code in my Stack Overflow Q45118312 Github Repository.
Results:
As you can see, Maximum Likelihood is simpler and better (MSE-wise).
So you have no reason to use the PDF Curve Fitting method.
Part of the code which does the heavy lifting is:
%% Simulation Parameters
numTests = 50;
numSamples = 1000;
paramLambdaBound = 10;
epsVal = 1e-6;
hPoissonPmf = @(paramLambda, vParamK) ((paramLambda .^ vParamK) * exp(-paramLambda)) ./ factorial(vParamK);
for ii = 1:ceil(1000 * paramLambdaBound)
if(hPoissonPmf(paramLambdaBound, ii) <= epsVal)
break;
end
end
vValGrid = [0:ii];
vValGrid = vValGrid(:);
vParamLambda = zeros([numTests, 1]);
vParamLambdaMl = zeros([numTests, 1]); %<! Maximum Likelihood
vParamLambdaCf = zeros([numTests, 1]); %<! Curve Fitting
%% Generate Data and Samples
for ii = 1:numTests
paramLambda = paramLambdaBound * rand([1, 1]);
vDataSamples = poissrnd(paramLambda, [numSamples, 1]);
vDataHist = histcounts(vDataSamples, [vValGrid - 0.5; vValGrid(end) + 0.5]) / numSamples;
vDataHist = vDataHist(:);
vParamLambda(ii) = paramLambda;
vParamLambdaMl(ii) = mean(vDataSamples); %<! Maximum Likelihood
vParamLambdaCf(ii) = lsqcurvefit(hPoissonPmf, 2, vValGrid, vDataHist, 0, inf); %<! Curve Fitting
end
This is the wrong way to do it.
You have data you believe follows a Poisson distribution.
Since the Poisson distribution is parameterized by a single parameter (lambda), what you need to do is apply parameter estimation.
The classic way to do so is Maximum Likelihood Estimation.
For this case, the Poisson distribution, you need the MLE of the Poisson distribution.
Namely, just calculate the sample mean of the data, as is done in poissfit().
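As a minimal sketch (with hypothetical sample data), both give the same number:
samples = poissrnd(4.2, [1000, 1]); % hypothetical data, true lambda = 4.2
lambdaHat = mean(samples); % the Poisson MLE is just the sample mean
lambdaFit = poissfit(samples); % poissfit returns the same estimate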
I have this non-linear process that sort of looks like GBM, but is not, because of the square-root noise. Both Mu's are constants, and l (in front of one Mu and of sigma) is a parameter. Sigma is a constant too. N is a population that increases.
This is not easily solved analytically.
Ultimately, I'm interested in starting off a bunch of these processes in MATLAB with "continuous" time steps, varying the parameter l for each process, and seeing what that looks like.
Since I've never done anything with SDEs in MATLAB, I am a little lost. I've had a look at the different SDE solvers, but I can't seem to make them work. As I said, I'm not hoping to solve anything, just to manipulate different population sizes, time, and this parameter l.
Anyone who can point me in the right direction?
Based on Desmond J. Higham's work, I ended up with this sort of ugly-looking approach. It's quite slow. If anyone has suggestions as to how I could vectorize it, or use an SDE solver such as SDETools to simulate it faster, I would be very grateful.
clear;
clc;
clf
T = 35;
N = 2^12;
Delta = T/N;
lambda = 0.1;
sigma = 4;
Xzero = 1;
P = 500;
Xem = zeros(1,N+1);
Xem(1) = Xzero;
for i = 1:P
for j = 1:N
if log(Xem(j)) < 0 % kill this path once X drops below 1
Xem(j) = nan;
end
Winc = sqrt(Delta)*randn;
Xem(j+1) = Xem(j) + lambda*Delta*Xem(j) + sigma*sqrt(Xem(j))*Winc;
end
plot(0:Delta:T,log(Xem))
xlabel('t','FontSize',16), ylabel('X','FontSize',16)
hold on;
end
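For what it's worth, here is a minimal vectorized sketch of the same Euler-Maruyama scheme (assuming the same SDE and the same path-killing rule as above); all P paths advance together, leaving only the loop over time:
T = 35; N = 2^12; Delta = T/N;
lambda = 0.1; sigma = 4; Xzero = 1; P = 500;
Xem = zeros(P,N+1); % one row per path
Xem(:,1) = Xzero;
for j = 1:N
    Xem(Xem(:,j) < 1, j) = NaN; % same rule as log(X) < 0 above
    Winc = sqrt(Delta)*randn(P,1); % one Brownian increment per path
    Xem(:,j+1) = Xem(:,j) + lambda*Delta*Xem(:,j) + sigma*sqrt(Xem(:,j)).*Winc;
end
plot(0:Delta:T,log(Xem).') % one curve per path
xlabel('t','FontSize',16), ylabel('X','FontSize',16)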
I have a set of ODEs written in matrix form as $X' = AX$; I also have a desired value of the states, $X_{des}$. $X$ is a five-dimensional vector. I want to stop the integration after all the states reach their desired values (or at least get within $10^{-3}$ of them). How do I use an event function in MATLAB to do this? (All the help I have seen is about one-dimensional states.)
PS: I know for sure that all the states approach their desired values after a long time. I just want to stop the integration once they are within $10^{-3}$ of the desired values.
First, I presume that you're aware that you can use the matrix exponential (expm in Matlab) to solve your system of linear differential equations directly.
There are many ways to accomplish what you're trying to do. They all depend a bit on your system, how it behaves, and the particular event you want to capture. Here's a small example for a 2-by-2 system of linear differential equations:
function multipleeventsdemo
A = [-1 1;1 -2]; % Example A matrix
tspan = [0 50]; % Initial and final time
x0 = [1;1]; % Initial conditions
f = @(t,y)A*y; % ODE function
thresh = 0; % Threshold value
tol = 1e-3; % Tolerance on threshold
opts = odeset('Events',@(t,y)events(t,y,thresh,tol)); % Create events function
[t,y] = ode45(f,tspan,x0,opts); % Integrate with options
figure;
plot(t,y);
function [value,isterminal,direction] = events(t,y,thresh,tol)
value = y-thresh-tol;
isterminal = all(y-thresh-tol<=0)+zeros(size(y)); % Change termination condition
direction = -1;
Integration is stopped when both states are within tol of thresh. This is accomplished by adjusting the isterminal output of the events function. Note that separate tolerance and threshold variables aren't really necessary – you simply need to define the crossing value.
If your system oscillates as it approaches its steady state (if A has complex eigenvalues), then you'll need to do more work. But your question doesn't indicate this. And again, numerical integration may not be the easiest/best way to solve your problem with such a system. Here is how you could use expm in conjunction with a bit of symbolic math:
A = [-1 1;1 -2];
x0 = [1;1];
tol = 1e-3;
syms t_sym
y = simplify(expm(A*t_sym)*x0) % Y as a function of t
t0 = NaN(1,length(x0));
for i = 1:length(x0)
sol = double(solve(y(i)==tol,t_sym)) % Solve for t when y(i) equal to tol
if ~isempty(sol) % Could be no solution, then NaN
t0(i) = max(sol); % Or more than one solution, take largest
end
end
f = matlabFunction(y); % Create vectorized function of t
t_vec = linspace(0,max(t0),1e2); % Time vector
figure;
plot(t_vec,f(t_vec));
This will only work for fairly small A, however, because of the symbolic math. Numerical approaches using expm are also possible and likely more scalable.
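For example, here is a purely numerical sketch along those lines (assuming the same A, x0, and tol as above; expm(A*dt) propagates the exact solution by one step):
A = [-1 1;1 -2];
x0 = [1;1];
tol = 1e-3;
dt = 0.01;
E = expm(A*dt); % exact one-step propagator: x(t+dt) = E*x(t)
x = x0;
t = 0;
while any(abs(x) > tol) % thresh = 0, as in the events example
    x = E*x;
    t = t + dt;
end
fprintf('All states within %g of zero at t = %.2f\n', tol, t);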
So I have had a few posts over the last few days about using MATLAB to perform a convolution (see here). But I am having issues, and I just want to try to use the convolution property of Fourier transforms. I have the code below:
width = 83.66;
x = linspace(-400,400,1000);
a2 = 1.205e+004 ;
al = 1.778e+005 ;
b1 = 94.88 ;
c1 = 224.3 ;
d = 4.077 ;
measured = al*exp(-((abs((x-b1)./c1).^d)))+a2;
%slit
rect = @(x) 0.5*(sign(x+0.5) - sign(x-0.5));
rt = rect(x/width);
subplot(5,1,1);plot(x,measured);title('imported data-super gaussian')
subplot(5,1,2);plot(x,(real(fftshift(fft(rt)))));title('transformed slit')
subplot(5,1,3);plot(x,rt);title('slit')
u = (fftshift(fft(measured)));
l = u./(real(fftshift(fft(rt))));
response = (fftshift(ifft(l)));
subplot(5,1,4);plot(x,real(response));title('response')
%Data Check
check = conv(rt,response,'full');
z = linspace(min(x),max(x),length(check));
subplot(5,1,5);plot(z,real(check));title('check')
My goal is to take my case, which is $measured = rt \ast signal$, and find signal. Once I find my signal, I convolve it with the rectangle and should get back measured, but I do not get that.
I have very little MATLAB experience and pretty much zero signal processing experience (working with DFTs), so any advice on how to do this would be greatly appreciated!
After considering the problem statement and woodchips' advice, I think we can get closer to a solution.
Input: u(t)
Output: y(t)
If we assume the system is causal and linear we would need to shift the rect function to occur before the response, like so:
rt = rect(((x+270+(83.66/2))/83.66));
figure; plot( x, measured, x, max(measured)*rt )
Next, consider the response to the input. It looks first order to me. If we assume so, the system transfer function in the frequency domain will have the form:
H(s) = (b1*s + b0)/(s + a0)
You had been trying to use convolution and FFTs to find the impulse response, the "transfer function" in the time domain. However, the FFT of the rect, being a sinc, crosses zero periodically. These zero points make using the FFT to identify the system extremely difficult, because:
Y(s)/U(s) = H(s)
So we have U(s) = A*sinc(a*s), which has zeros, and those zeros make the division blow up to infinity, which doesn't make sense for a real system.
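You can see those problem bins directly, using the same x, width, and rect as in your code:
x = linspace(-400,400,1000);
width = 83.66;
rect = @(x) 0.5*(sign(x+0.5) - sign(x-0.5));
RT = fftshift(fft(rect(x/width)));
figure;
semilogy(abs(RT) + eps) % eps avoids log of exact zeros; note the periodic dips
title('|FFT of rect|: near-zero bins blow up the division')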
Instead, let's attempt to fit coefficients to the frequency-domain linear transfer function that we postulate is of order 1. Since there are no overshoots, etc., first order is a reasonable place to start.
EDIT
I realized my first answer here had an unstable system description, sorry! The solution to the ODE is very stiff due to the rect function, so we need to crank down the maximum time step and use a stiff solver. However, this is still a tough problem to solve this way; a more analytical approach may be more tractable.
We use fminsearch to find the continuous time transfer function coefficients like:
function x = findTf(c0,u,y,t)
% minimize the error for the estimated
% parameters of the transfer function
% use a scaled version without an offset for the response, the
% scalars can be added back later without breaking the solution.
yo = (y - min(y))/max(y);
x = fminsearch(@(c) simSystem(c,u,yo,t),c0); % fit against the scaled response yo
end
% calculate the derivatives of the transfer function
% inputs and outputs using the estimated coefficient
% vector c
function out = simSystem(c,u,y,t)
% estimate the derivative of the input
du = diff([0; u])./diff([0; t]);
% estimate the second derivative of the input
d2u = diff([0; du])./diff([0; t]);
% find the output of the system, corresponds to measured
opt = odeset('MaxStep',mean(diff(t))/100);
[~,yp] = ode15s(@(tt,yy) odeFun(tt,yy,c,du,d2u,t),t,[y(1) u(1) 0],opt);
% find the error between the actual measured output and the output
% from the system with the estimated coefficients
out = sum((yp(:,1) - y).^2);
end
function dy = odeFun(t,y,c,du,d2u,tx)
dy = [c(1)*y(3)+c(2)*y(2)-c(3)*y(1);
interp1(tx,du,t);
interp1(tx,d2u,t)];
end
Something like that anyway should get you going.
x = findTf([1 1 1]',rt',measured',x');