For instance: a, b, c and n are constants that need to be determined by fitting a particular equation to data.
How can I calculate statistics (mean, standard deviation, variance, skewness, and Student's t-test value) for the parameters of a custom equation, for example the quadratic plateau equation?
Example:
x=[0,40,80,100,120,150,170,200],
y=[1865,2855,3608,4057,4343,4389,4415,4478]
y = a*(x+n)^2 + b*(x+n) + c,   for x < xc (the x at Ymax)   ....(1)
y = yp (the plateau value),    for x >= xc                  ....(2)
I have fitted this equation with the following code:
yf = @(b,x) b(1).*(x+n).^2 + b(2)*(x+n) + b(3);
B0 = [0.006; 21; 1878];
[Bm,normresm] = fminsearch(@(b) norm(y - yf(b,x)), B0);
a = Bm(1);
b = Bm(2);
c = Bm(3);
xc = (-b/(2*a)) - n;
p = a*(xc+n)^2 + b*(xc+n) + c;
if (x < xc)
yfit = a.*(x+n).^2+ b*(x+n)+c;
else
yfit = p;
end
plot(x,yfit,'*')
hold on; plot(x,y); hold off
Note: I have already used the polyfit command; it was helpful and gave me results. However, I don't find it suitable here, since there is no option to customize the equation. Is there code that can give me these statistics?
Questions 1, 2 and 4)
Good practice is to set initial values close to the final result if you have previous knowledge about the equation system:
What you have is an overdetermined system of linear equations.
y(1) = a*x(1)^2 + b*x(1) + c
y(2) = a*x(2)^2 + b*x(2) + c
y(3) = a*x(3)^2 + b*x(3) + c
…
y(n) = a*x(n)^2 + b*x(n) + c
or in general:
y = X*A, where
A = [a; b; c]
X = [x(1)^2 x(1) 1;
x(2)^2 x(2) 1;
x(3)^2 x(3) 1;
...
x(n)^2 x(n) 1]
A common way to fit an overdetermined system (since it has no exact solution) is a least-squares fit (mldivide, \):
x=[0; 40; 80; 100; 120; 150; 170; 200];
y=[1865; 2855; 3608; 4057; 4343; 4389; 4415; 4478];
X = [x.^2 x ones(numel(x),1)];
A = X\y;
a0=A(1); %- initial value for a
b0=A(2); %- initial value for b
c0=A(3); %- initial value for c
You can customize the equation by customizing X and A.
You can also simply set the initial values to ones; this should have a negligibly small impact on the result (this is more related to Question 4):
a0=1;
b0=1;
c0=1;
or to random values
rng(10);
A = rand(3,1);
a0=A(1);
b0=A(2);
c0=A(3);
Question 3 - Statistics
If you need more control over monitoring the optimization process, use the more general form of the objective function (myfun in the code below) to save all intermediate parameter values (a_iter, b_iter, c_iter):
function Fiting_ex()
global a_iter b_iter c_iter
a_iter = 0;
b_iter = 0;
c_iter = 0;
x=[0; 40; 80; 100; 120; 150; 170; 200];
y=[1865; 2855; 3608; 4057; 4343; 4389; 4415; 4478];
X = [x.^2 x ones(numel(x),1)];
A = X\y;
a0=A(1);
b0=A(2);
c0=A(3);
B0 = [a0; b0; c0];
[Bm,normresm] = fminsearch(@(b) myfun(b,x,y),B0);
a=Bm(1);
b=Bm(2);
c=Bm(3);
xc=-b/(2*a);
p=c-(b^2/(4*a));
yfit = zeros(numel(x),1);
for i=1:numel(x)
if (x(i) < xc)
yfit(i) = a.*x(i).^2+ b*x(i)+c;
else
yfit(i) = p;
end
end
plot(x,yfit,'*')
hold on;
plot(x,y);
hold off
% Statistic on optimization process
a_mean = mean(a_iter(2:end)); % mean value
a_var = var(a_iter(2:end)); % variance
a_std = std(a_iter(2:end)); % standard deviation
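% A possible extension (assumes the Statistics and Machine Learning Toolbox is
% available): the skewness and Student t-test values asked for in the question
% can be estimated from the same iteration history:
a_skew = skewness(a_iter(2:end));                 % skewness
[~, a_p, ~, a_stats] = ttest(a_iter(2:end));      % one-sample t-test; a_stats.tstat is the t value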
function f = myfun(Bm, x, y)
global a_iter b_iter c_iter
a_iter = [a_iter Bm(1)];
b_iter = [b_iter Bm(2)];
c_iter = [c_iter Bm(3)];
yf = Bm(1)*(x).^2+Bm(2)*(x)+Bm(3);
a=Bm(1);
b=Bm(2);
c=Bm(3);
xc=-b/(2*a);
p=c-(b^2/(4*a));
yfit = zeros(numel(x),1);
for i=1:numel(x)
if (x(i) < xc)
yfit(i) = a.*x(i).^2+ b*x(i)+c;
else
yfit(i) = p;
end
end
f = norm(y - yfit);
I want to write a program that uses Newton's method to estimate the upper limit x of this integral:
T(x) = integral from 0 to x of 1/velocity(s) ds
where x is the total distance.
I have a function that calculates the time it takes to arrive at a certain distance, using the trapezoid method for numerical integration (without using trapz):
function T = time_to_destination(x, route, n)
h=(x-0)/n;
dx = 0:h:x;
y = (1./(velocity(dx,route)));
Xk = dx(2:end)-dx(1:end-1);
Yk = y(2:end)+y(1:end-1);
T = 0.5*sum(Xk.*Yk);
end
It gets its velocity values through ppval of a cubic spline interpolation between a set of data points, where extrapolated values should not be allowed:
function [v] = velocity(x, route)
load(route);
if all(x >= distance_km(1)) && all(x <= distance_km(end))
estimation = spline(distance_km, speed_kmph);
v = ppval(estimation, x);
else
error('Bad input, please choose a new value')
end
end
A plot of the velocity spline, evaluated at dx = 1:0.1:65, is included in case that's of interest.
Now I want to write a function that can solve for the distance travelled after a given time, using Newton's method without fzero/fsolve. But I have no idea how to solve for the upper bound of an integral.
According to the fundamental theorem of calculus, I suppose the derivative of the integral is the function inside the integral, which is what I've tried to recreate as time_to_destination / (1/velocity).
I added the constant I want to solve for to time_to_destination, so it is
(time_to_destination - (input time)) / (1/velocity)
Not sure if I'm doing that right.
EDIT: I rewrote my code; it works better now, but my stop condition for Newton-Raphson doesn't seem to converge to zero. I also tried to include the error from the trapezoid integration (ET), but I'm not sure I should bother with that yet. The route file is linked at the bottom.
Stop condition and error calculation of Newton's method: condition = |dGuess2 - dGuess1| + |ET|
Error estimation of the trapezoid rule: ET = (T(n/2) - T(n)) / 3
function x = distance(T, route)
n=180
route='test.mat'
dGuess1 = 50;
dDistance = T;
i = 1;
condition = inf;
while condition >= 1e-4 && 300 >= i
i = i + 1 ;
dGuess2 = dGuess1 - (((time_to_destination(dGuess1, route,n))-dDistance)/(1/(velocity(dGuess1, route))))
if i >= 2
ET =(time_to_destination(dGuess1, route, n/2) - time_to_destination(dGuess1, route, n))/3;
condition = abs(dGuess2 - dGuess1)+ abs(ET);
end
dGuess1 = dGuess2;
end
x = dGuess2
Route file: https://drive.google.com/open?id=18GBhlkh5ZND1Ejh0Muyt1aMyK4E2XL3C
Observe that the Newton-Raphson method determines the roots of the function. I.e. you need to have a function f(x) such that f(x)=0 at the desired solution.
In this case you can define f as
f(x) = Time(x) - t
where t is the desired time. Then by the second fundamental theorem of calculus
f'(x) = 1/Velocity(x)
With these functions defined the implementation becomes quite straightforward!
First, we define a simple Newton-Raphson function which takes anonymous functions as arguments (f and f') as well as an initial guess x0.
function x = newton_method(f, df, x0)
MAX_ITER = 100;
EPSILON = 1e-5;
x = x0;
fx = f(x);
iter = 0;
while abs(fx) > EPSILON && iter <= MAX_ITER
x = x - fx / df(x);
fx = f(x);
iter = iter + 1;
end
end
Then we can invoke our function as follows
t_given = 0.3; % e.g. we want to determine distance after 0.3 hours.
n = 180;
route = 'test.mat';
f = @(x) time_to_destination(x, route, n) - t_given;
df = @(x) 1/velocity(x, route);
distance_guess = 50;
distance = newton_method(f, df, distance_guess);
Result
>> distance
distance = 25.5877
Also, I rewrote your time_to_destination and velocity functions as follows. This version of time_to_destination uses all the available data to make a more accurate estimate of the integral. Using these functions the method seems to converge faster.
function t = time_to_destination(x, d, v)
% x is scalar value of destination distance
% d and v are arrays containing measured distance and velocity
% Assumes d is strictly increasing and d(1) <= x <= d(end)
idx = d < x;
if ~any(idx)
t = 0;
return;
end
v1 = interp1(d, v, x);
t = trapz([d(idx); x], 1./[v(idx); v1]);
end
function v = velocity(x, d, v)
v = interp1(d, v, x);
end
Using these new functions requires that the definitions of the anonymous functions are changed slightly.
t_given = 0.3; % e.g. we want to determine distance after 0.3 hours.
load('test.mat');
f = @(x) time_to_destination(x, distance_km, speed_kmph) - t_given;
df = @(x) 1/velocity(x, distance_km, speed_kmph);
distance_guess = 50;
distance = newton_method(f, df, distance_guess);
Because the integral is estimated more accurately the solution is slightly different
>> distance
distance = 25.7771
Edit
The updated stopping condition can be implemented as a slight modification to the newton_method function. We shouldn't expect the trapezoid rule error to go to zero so I omit that.
function x = newton_method(f, df, x0)
MAX_ITER = 100;
TOL = 1e-5;
x = x0;
iter = 0;
dx = inf;
while dx > TOL && iter <= MAX_ITER
x_prev = x;
x = x - f(x) / df(x);
dx = abs(x - x_prev);
iter = iter + 1;
end
end
To check our answer we can plot the time vs. distance and make sure our estimate falls on the curve.
...
distance = newton_method(f, df, distance_guess);
load('test.mat');
t = zeros(size(distance_km));
for idx = 1:numel(distance_km)
t(idx) = time_to_destination(distance_km(idx), distance_km, speed_kmph);
end
plot(t, distance_km); hold on;
plot([t(1) t(end)], [distance distance], 'r');
plot([t_given t_given], [distance_km(1) distance_km(end)], 'r');
xlabel('time');
ylabel('distance');
axis tight;
One of the main issues with my code was that n was too low; the error of the trapezoidal sum, my estimate of the integral, was too high for the Newton-Raphson method to converge to a very small number.
Here was my final code for this problem:
function x = distance(T, route)
load(route)
n=10e6;
x = mean(distance_km);
i = 1;
maxiter=100;
tol= 5e-4;
condition=inf
fx = @(x) time_to_destination(x, route, n);
dfx = @(x) 1./velocity(x, route);
while condition > tol && i <= maxiter
i = i + 1 ;
Guess2 = x - ((fx(x) - T)/(dfx(x)))
condition = abs(Guess2 - x)
x = Guess2;
end
end
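A call would then look like this (assuming the test.mat route file linked above is in the working directory):
x = distance(0.3, 'test.mat')   % distance reached after 0.3 hours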
Introduction
I am using MATLAB to simulate some dynamic systems by numerically solving systems of second-order ordinary differential equations using ode45. I found a great tutorial from MathWorks (link for the tutorial at the end) on how to do this.
In the tutorial the system of equations is explicit in x and y as shown below:
x''=-D(y) * x' * sqrt(x'^2 + y'^2)
y''=-D(y) * y' * sqrt(x'^2 + y'^2) + g(y)
Both equations above have form y'' = f(x, x', y, y')
Question
However, I am coming across systems of equations where the variables cannot be solved for explicitly as shown in the example. For example, one of the systems has the following set of second-order ordinary differential equations:
y double prime equation
y'' - .5*L*(x''*sin(x) + x'^2*cos(x)) + (k/m)*y - g = 0
x double prime equation
.33*L^2*x'' - .5*L*y''*sin(x) - .33*L^2*C*cos(x) + .5*g*L*sin(x) = 0
A single prime is first derivative
A double prime is second derivative
L, g, m, k, and C are given parameters.
How can MATLAB be used to numerically solve a set of second-order ordinary differential equations where the second derivatives cannot be solved for explicitly?
Thanks!
Your second system has the form
a11*x'' + a12*y'' = f1(x,y,x',y')
a21*x'' + a22*y'' = f2(x,y,x',y')
which you can solve as a linear system
[x''; y''] = A\f
or in this case explicitly using Cramer's rule
x'' = ( a22*f1 - a12*f2 ) / (a11*a22 - a12*a21)
y'' = ( a11*f2 - a21*f1 ) / (a11*a22 - a12*a21)
I would strongly recommend keeping the intermediate variables in the code, to reduce the chance of typing errors and to avoid computing the same expressions multiple times.
Code could look like this (untested):
function dz = odefunc(t,z)
% L, k, m, g, C are assumed to be defined elsewhere (e.g. as globals or via nested scope)
x=z(1); dx=z(2); y=z(3); dy=z(4);
A = [ [-.5*L*sin(x), 1] ; [.33*L^2, -0.5*L*sin(x)] ];
b = [ [dx^2*cos(x) + (k/m)*y-g]; [-.33*L^2*C*cos(x) + .5*g*L*sin(x)] ];
d2 = A\b;
dz = [ dx; d2(1); dy; d2(2) ];   % ODE solvers expect a column vector
end
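For completeness, the explicit Cramer's-rule variant mentioned above would be a small sketch like this (same untested assumptions as the code above, with d2(1) = x'' and d2(2) = y''):
detA = A(1,1)*A(2,2) - A(1,2)*A(2,1);
d2 = [ A(2,2)*b(1) - A(1,2)*b(2);           % x''
       A(1,1)*b(2) - A(2,1)*b(1) ] / detA;  % y''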
Yes, your method is correct!
I post my code below:
%Rotating Pendulum Sym Main
clc
clear all;
%Define parameters
global M K L g C;
M = 1;
K = 25.6;
L = 1;
C = 1;
g = 9.8;
% define initial values for e, ed, theta, thetad
e_0 = 1;
ed_0 = 0;
theta_0 = 0;
thetad_0 = .5;
initialValues = [e_0, ed_0, theta_0, thetad_0];
% Set a timespan
t_initial = 0;
t_final = 36;
dt = .01;
N = (t_final - t_initial)/dt;
timeSpan = linspace(t_initial, t_final, N);
% Run ode45 to get z = (e, ed, theta, thetad)
[t, z] = ode45(@RotSpngHndl, timeSpan, initialValues);
%initialize variables
e = zeros(N,1);
ed = zeros(N,1);
theta = zeros(N,1);
thetad = zeros(N,1);
T = zeros(N,1);
V = zeros(N,1);
x = zeros(N,1);
y = zeros(N,1);
for i = 1:N
e(i) = z(i, 1);
ed(i) = z(i, 2);
theta(i) = z(i, 3);
thetad(i) = z(i, 4);
T(i) = .5*M*(ed(i)^2 + (1/3)*L^2*C*sin(theta(i)) + (1/3)*L^2*thetad(i)^2 - L*ed(i)*thetad(i)*sin(theta(i)));
V(i) = -M*g*(e(i) + .5*L*cos(theta(i)));
E(i) = T(i) + V(i);
end
figure(1)
plot(t, T,'r');
hold on;
plot(t, V,'b');
plot(t,E,'y');
title('Energy');
xlabel('time(sec)');
legend('Kinetic Energy', 'Potential Energy', 'Total Energy');
Here is function handle file for ode45:
function dz = RotSpngHndl(~, z)
% Define Global Parameters
global M K L g C
A = [1, -.5*L*sin(z(3));
-.5*L*sin(z(3)), (1/3)*L^2];
b = [.5*L*z(4)^2*cos(z(3)) - (K/M)*z(1) + g;
(1/3)*L^2*C*cos(z(3)) + .5*g*L*sin(z(3))];
X = A\b;
% return column vector [ed; edd; thetad; thetadd]
dz = [z(2);
X(1);
z(4);
X(2)];
I'm trying to solve a system of ordinary differential equations in MATLAB.
I have a simple equation:
dy = -k/M *x - c/M *y+ F/M.
This is defined in my ODE function test2.m, which depends on the values X and t. I want to trigger 'F' with a signal generated by my custom function squaresignal.m. Its output is the variable u, spanning from 0 to 1, as it is a smooth Heaviside function (think square wave). The inputs to squaresignal.m are t and f:
u=squaresignal(t,f)
These values are to be used inside my function test2, to enable variable 'F' when u == 1 and disable it for all other values.
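For reference, a smooth 0-to-1 square wave with this interface could be built roughly like this (a hypothetical sketch only; the actual squaresignal.m is not shown in the question):
function u = squaresignal(t, f)
% Hypothetical tanh-smoothed square wave of frequency f, ranging from 0 to 1
k = 50;                               % smoothing steepness (assumed)
u = 0.5*(1 + tanh(k*sin(2*pi*f*t)));
end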
My ode function test2.m reads:
function dX = test2(t ,X, u)
x = X (1) ;
y = X (2) ;
M = 10;
k = 50;
c = 10;
F = 300;
if u == 1
F = F;
else
F = 0,
end
dx = y ;
dy = -k/M *x - c/M *y+ F/M ;
dX = [ dx dy ]';
end
And my runscript reads:
clc
clear all
tstart = 0;
tend = 10;
tsteps = 0.01;
tspan = [0 10];
t = [tstart:tsteps:tend];
f = 2;
u = squaresignal(t,f)
for ii = 1:length(u)
options=odeset('maxstep',tsteps,'outputfcn',@odeplot);
[t,X]=ode15s(@(t,X)test2(t,X,u(ii)),[tstart tend],[0 0],u);
end
figure (1);
plot(t,X(:,1))
figure (2);
plot(t,X(:,2));
However, the for-loop does not seem to do its magic. I still only get F = 0, instead of F = F, at times when u == 1. And I know that u is equal to one at some times, because the output of squaresignal.m is visible to me.
So the real question is this: how do I properly pass my variable u to my function test2.m and use it there to trigger F? Should squaresignal.m perhaps be called inside the ODE function test2.m instead?
Here's an example where I pass a variable coeff to the differential equation:
function [T,Q] = main()
t_span = [0 10];
q0 = [0.1; 0.2]; % initial state
ode_options = odeset(); % currently no options... You could add some here
coeff = 0.3; % The parameter we wish to pass to the differential eq.
[T,Q] = ode15s(@(t,q)diffeq(t,q,coeff),t_span,q0, ode_options);
end
function dq = diffeq(t,q,coeff)
% Preallocate vector dq
dq = zeros(length(q),1);
% Update dq:
dq(1) = q(2);
dq(2) = -coeff*sin(q(1));
end
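Running it is then just a minimal check of the sketch above:
[T,Q] = main();
plot(T, Q(:,1));   % first state vs. time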
EDIT:
Could this be the problem?
tstart = 0;
tend = 10;
tsteps = 0.01;
tspan = [0 10];
t = [tstart:tsteps:tend];
f = 2;
u = squaresignal(t,f)
Here you create a time vector t which has nothing to do with the time vector returned by the ODE solver! This means that at first we have t(2) = 0.01, but once you run your ODE solver, t(2) can be anything. So yes, if you want to load an external signal source that depends on time, then you need to call your squaresignal.m from within the differential equation and pass the solver's current time t! Your code should look like this (note that I'm now passing f as an additional argument to the diffeq):
function dX = test2(t ,X, f)
x = X (1) ;
y = X (2) ;
M = 10;
k = 50;
c = 10;
F = 300;
u = squaresignal(t,f)
if u == 1
F = F;
else
F = 0,
end
dx = y ;
dy = -k/M *x - c/M *y+ F/M ;
dX = [ dx dy ]';
end
Note however that MATLAB's ODE solvers do not like at all what you're doing here. You are drastically (i.e. non-smoothly) changing the dynamics of your system. What you should do is use one of the following:
a) Events, if you want to trigger some behaviour (like termination) depending on the integrated variable x, or
b) If you want to trigger the behaviour based on the time t, segment your integration into parts during which the differential equation does not change. You can then resume your integration by using the current state and time as x0 and t0 for the next run of ode15s (see the sketch after this list). Of course this only works if your external signal source u is something simple like a step function or square wave. In the case of the square wave you would only integrate over a timespan during which the wave does not jump, and then exactly at the time of the jump you start another integration with the altered differential equations.
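Here is a minimal sketch of option b), assuming a square wave of frequency f that toggles between 0 and 1 every half period (the exact switching times would come from your squaresignal.m):
f = 2;                          % signal frequency
t_switch = 0:1/(2*f):10;        % assumed jump times of the square wave
X0 = [0 0];                     % state at the start of the first segment
T_all = []; X_all = [];
for k = 1:numel(t_switch)-1
    u = mod(k,2);               % assumed signal value on this segment (1,0,1,0,...)
    [t,X] = ode15s(@(t,X) test2(t,X,u), [t_switch(k) t_switch(k+1)], X0);
    T_all = [T_all; t]; X_all = [X_all; X];
    X0 = X(end,:);              % resume the next segment from the final state
end
plot(T_all, X_all(:,1))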
I was recently given a piece of MATLAB code by a lecturer as a way to solve simultaneous equations using the Newton-Raphson method with a Jacobian matrix (I've also left in his comments). However, although he's provided me with the basic code, I cannot seem to get it working no matter how hard I try. I've spent many hours trying to introduce the 'func' function, but to no avail, frequently getting the message that there aren't enough inputs. Any help would be greatly appreciated, especially with how to write the 'func' function.
function root = newtonRaphson2(func,x,tol)
% Newton-Raphson method of finding a root of simultaneous
% equations fi(x1,x2,...,xn) = 0, i = 1,2,...,n.
% USAGE: root = newtonRaphson2(func,x,tol)
% INPUT:
% func = handle of function that returns[f1,f2,...,fn].
% x = starting solution vector [x1,x2,...,xn].
% tol = error tolerance (default is 1.0e4*eps).
% OUTPUT:
% root = solution vector.
if size(x,1) == 1; x = x'; end % x must be column vector
for i = 1:30
[jac,f0] = jacobian(func,x);
if sqrt(dot(f0,f0)/length(x)) < tol
root = x; return
end
dx = jac\(-f0);
x = x + dx;
if sqrt(dot(dx,dx)/length(x)) < tol
root = x; return
end
end
error('Too many iterations')
function [jac,f0] = jacobian(func,x)
% Returns the Jacobian matrix and f(x).
h = 1.0e-4;
n = length(x);
jac = zeros(n);
f0 = feval(func,x);
for i =1:n
temp = x(i);
x(i) = temp + h;
f1 = feval(func,x);
x(i) = temp;
jac(:,i) = (f1 - f0)/h;
end
The simultaneous equations to be solved are:
sin(x)+y^2+ln(z)-7=0
3x+2^y-z^3+1=0
x+y+z=0
with the starting point (1,1,1).
However, these are arbitrary and can be replaced with anything, I mainly just need to know the general format.
Many thanks, I know this may be a very simple task but I've only recently started teaching myself Matlab.
You need to create a new file called myfunc.m (or whatever name you like) which takes a single input parameter - a column vector x - and returns a single output vector - a column vector y such that y = f(x).
For example,
function y = myfunc(x)
y = zeros(3, 1);
y(1) = sin(x(1)) + x(2)^2 + log(x(3)) - 7;
y(2) = 3*x(1) + 2^x(2) - x(3)^3 + 1;
y(3) = x(1) + x(2) + x(3);
end
You can then refer to this function as @myfunc, as in
>> newtonRaphson2(@myfunc, [1;1;1], 1e-6);
The reason for the special notation is that MATLAB allows you to call a function with no parameters by omitting the parentheses () that follow it. So, for example, MATLAB interprets myfunc as a call to the function with no arguments (and tries to replace it with its result), whereas @myfunc refers to the function itself rather than its result.
Alternatively you can write a function directly using the @ notation, as in
>> newtonRaphson2(@(x) exp(x) - 3*x, 2, 1e-2)
ans =
1.5315
>> newtonRaphson2(@(x) exp(x) - 3*x, 1, 1e-2)
ans =
0.6190
which are the two roots of the equation exp(x) - 3 * x = 0.
Edit - as an aside, your professor has terrible coding style (if the code in your question is a direct copy-paste of what he gave you, and you haven't mangled it along the way). It would be better to write the code like this, with indentation making it clear what the structure of the code is.
function root = newtonRaphson2(func, x, tol)
% Newton-Raphson method of finding a root of simultaneous
% equations fi(x1,x2,...,xn) = 0, i = 1,2,...,n.
%
% USAGE: root = newtonRaphson2(func,x,tol)
%
% INPUT:
% func = handle of function that returns[f1,f2,...,fn].
% x = starting solution vector [x1,x2,...,xn].
% tol = error tolerance (default is 1.0e4*eps).
%
% OUTPUT:
% root = solution vector.
if size(x, 1) == 1; % x must be column vector
x = x';
end
for i = 1:30
[jac, f0] = jacobian(func, x);
if sqrt(dot(f0, f0) / length(x)) < tol
root = x; return
end
dx = jac \ (-f0);
x = x + dx;
if sqrt(dot(dx, dx) / length(x)) < tol
root = x; return
end
end
error('Too many iterations')
end
function [jac, f0] = jacobian(func,x)
% Returns the Jacobian matrix and f(x).
h = 1.0e-4;
n = length(x);
jac = zeros(n);
f0 = feval(func,x);
for i = 1:n
temp = x(i);
x(i) = temp + h;
f1 = feval(func,x);
x(i) = temp;
jac(:,i) = (f1 - f0)/h;
end
end
A nonlinear code
% This produces a black waterfall plot of the solution of
% u_t + 6u^2u_x + u_xxx - (b(x,t)u)_x = 0 on [-pi,pi]
% and returns the time vector to use for efft.m
% The code is a simple modification of Trefethen's example p27.m
% The initial data is given by a double soliton with parameters
% a1, a2 (centers) and c1, c2 (velocities).
% The arguments of mkdvb.m are time t, parameters given in a vector form
% [a1,a2,c1,c2], a function bb = b(x,t), and the log_2 of the number of
% x grid points (the default value is 8).
% It returns the time vector T, the position vector X, and the matrix of
% solution values at times given by T.
%
% Here is an example
%
% T = mkdvB(@(x,t) 100*cos(x-10^3*t).^2-50*sin(2*x+10^3*t), 0.05, [1,-1,4,-11],9)
%
% The time vector T can then be entered into effdyn.m code to compare the
% solution to the effective dynamics:
%
% [T,Y]=effdyn(@(x,t) 100*cos(x-10^3*t).^2-50*sin(2*x+10^3*t), T, [1,-1,4,-11])
%
function [T,X,U] = mkdvb2(fun,tmax,ZZ,N)
if (nargin < 2 )
tmax=0.065;
end
if (nargin < 3)
ZZ = [-1,-2,-6,9];
end
if (nargin < 4)
N = 8;
end
N = 2^N;
% Set up grid and two-soliton initial data:
dt = .4/N^2; x = (2*pi/N)*(-N/2:N/2-1)';
drawnow, set(gcf,'renderer','zbuffer')
u=sqrt(2)*qu(x,ZZ);
v = fft(u); k = [0:N/2-1 0 -N/2+1:-1]'; ik3 = 1i*k.^3;
% Solve PDE and plot results:
nplt = floor((tmax/25)/dt); nmax = round(tmax/dt);
udata = u; tdata = 0; h = waitbar(0,'please wait...');
for n = 1:nmax %#ok<ALIGN>
t = n*dt; g = -1i*dt*k;
qz=fun(x,t);
E = exp(dt*ik3/2); E2 = E.^2;
a = g.*fft(real( ifft( v ) ).^3 - ifft( v ).*qz);
b = g.*fft(real( ifft(E.*(v+a/2)) ).^3 - ifft(E.*(v+a/2)).*qz);
c = g.*fft(real( ifft(E.*v + b/2) ).^3 - ifft(E.*v + b/2).*qz);
d = g.*fft(real( ifft(E2.*v+E.*c) ).^3 - ifft(E2.*v+E.*c).*qz);
v = E2.*v + (E2.*a + 2*E.*(b+c) + d)/6;
if mod(n,nplt) == 0
u = real(ifft(v)); waitbar(n/nmax)
udata = [udata u]; tdata = [tdata t];
end
end
udata = udata/sqrt(2);
hold on
[~,P] = size(tdata);
z1=0;
z2=0;
for j=1:P
plot3(x,tdata(j)+0*x, udata(:,j) );
z1=max(z1,max(udata(:,j)));
z2=min(z1,min(udata(:,j)));
end
T = tdata;
U = udata;
X = x;
view(0,60)
xlabel x, ylabel t, axis([-pi pi 0 tmax z2 z1]), grid off
close(h)
function qu = qu(x,ZZ)
k1 = ZZ(3)*(x - ZZ(1));
k2 = ZZ(4)*(x - ZZ(2));
ga1 = exp(-k1);
ga2 = - exp(-k2);
m11=(1+ga1.^2)./(2*ZZ(3));
m22=(1+ga2.^2)./(2*ZZ(4));
m12=(1+ga1.*ga2)./(ZZ(3)+ZZ(4));
M=m11.*m22 - m12.^2;
M1=ga1.*(m12 - m22) + ga2.*(m12-m11);
qu = M1./M;
The code is a bit messy, but fortunately this kind of problem is very easy to troubleshoot.
1. Save your code in a function or script.
2. Turn on dbstop if error.
3. See where the error occurs.
You should now be on a line like
[y1, y2] = f(x1,x2,x3)
Try to evaluate all of the input arguments; at least one of them will not exist (yet).
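For example, a minimal debugging session for the code above (assuming it is saved as mkdvb2.m) might look like:
dbstop if error
[T,X,U] = mkdvb2(@(x,t) 100*cos(x-10^3*t).^2 - 50*sin(2*x+10^3*t), 0.05, [1,-1,4,-11], 9);
% When execution stops at the error, inspect the inputs of the failing call:
% dbstack   % where did it stop?
% whos      % which variables exist in this workspace?
% dbquit    % leave the debugger when done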