Related
How do I solve the following system of equations in MATLAB when one of the elements of the variable vector is a constant? Please give the code if possible.
More generally, if the solution is to use symbolic math, how do I go about generating a large number of variables, say 12 (rather than just two), even before solving them?
For example, create the required symbolic variables using syms, and then build the system of equations as below.
syms a1 a2
A = [matrix]
x = [1;a1;a2];
y = [1;0;0];
eqs = A*x == y
sol = solve(eqs,[a1, a2])
sol.a1
sol.a2
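If you need the results as ordinary numbers, the symbolic solutions can be converted with double. A small sketch, assuming the solution contains no remaining free symbols:
a1_num = double(sol.a1);
a2_num = double(sol.a2);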
In case you have a system with many variables, you could define all the symbols using syms, and solve it like above.
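If there are many variables, you don't have to type each name by hand; sym can create a whole vector of them at once. A sketch for 12 unknowns, assuming A and y have compatible sizes:
a = sym('a', [12 1]);   % creates the symbolic column vector [a1; a2; ...; a12]
x = [1; a];             % prepend the constant element, as in the example above
eqs = A*x == y;
sol = solve(eqs, a);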
You could also perform a parameter optimization with fminsearch. First you have to define a cost function, in a separate function file, in this example called cost_fcn.m.
function J = cost_fcn(p)
% make sure p is a vector
p = reshape(p, [length(p) 1]);
% system of equations, can be linear or nonlinear
A = magic(12); % your system, I took some arbitrary matrix
sol = A*p;
% the goal of the system of equations to reach, can be zero, or some other
% vector
goal = zeros(12,1);
% calculate the error
error = goal - sol;
% Use a cost criterion, e.g. sum of squares
J = sum(error.^2);
end
This cost function contains your system of equations and the goal solution. This can be any kind of system. The vector p contains the parameters being estimated, which will be optimized starting from some initial guess. To do the optimization, you have to create a script:
% initial guess, can be zeros, or some other starting point
p0 = zeros(12,1);
% do the parameter optimization
p = fminsearch(@cost_fcn, p0);
In this case p0 is the initial guess, which you provide to fminsearch. The values of this initial guess will then be adjusted until a minimum of the cost function is found. When the parameter optimization is finished, p will contain the parameters that result in the lowest error for your system of equations. It is, however, possible that this is a local minimum if there is no exact solution to the problem.
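As a small usage sketch, fminsearch can also return the final cost, and optimset lets you tighten the stopping tolerances (the option values below are only illustrative):
opts = optimset('TolX',1e-8,'TolFun',1e-8,'MaxFunEvals',1e5);
[p, J_final] = fminsearch(@cost_fcn, p0, opts); % J_final is the remaining cost at the minimum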
Your system is over-constrained, meaning you have more equations than unknowns, so you can't solve it exactly. What you can do is find a least-squares solution, using mldivide. First re-arrange your equations so that you have all the constant terms on the right side of the equal sign, then use mldivide:
>> A = [0.0297 -1.7796; 2.2749 0.0297; 0.0297 2.2749]
A =
0.029700 -1.779600
2.274900 0.029700
0.029700 2.274900
>> b = [1-2.2749; -0.0297; 1.7796]
b =
-1.274900
-0.029700
1.779600
>> A\b
ans =
-0.022191
0.757299
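You can also check how good the least-squares fit is by looking at the residual (a quick sketch; no output shown):
>> x = A\b;          % the least-squares solution (same values as ans above)
>> r = A*x - b;      % residual vector
>> norm(r)           % a small norm means the equations are nearly satisfied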
Consider the following Matlab code, in which I generate some data using a pseudo random number generator.
I would like your help to understand "how" random these numbers are from a statistical point of view, in the terms I explain below.
I first set some parameters
%%%%%%%%Parameters
clear
rng default
Xsup=-1:6;
Zsup=1:10;
n_m=200;
n_w=200;
R=n_m;
Then I generate the data
%%%%%%%%Creation of data [XZ,etapair,zetapair,etasingle,zetasingle]
%Vector X of dimension n_mx1
idX=randi(size(Xsup,2),n_m,1); %n_mx1
X=Xsup(idX).'; %n_mx1
%Vector Z of dimension n_wx1
idZ=randi(size(Zsup,2),n_w,1);
Z=Zsup(idZ).'; %n_wx1
%Combine X and Z in a matrix XZ of dimension (n_m*n_w)x2,
%which lists all possible combinations of values in X and Z
[cX, cZ] = ndgrid(X,Z);
XZ = [cX(:), cZ(:)]; %(n_m*n_w)x2
%Vector etapair of dimension (n_m*n_w)x1
etapair=randn(n_m*n_w,1); %(n_m*n_w)x1
%Vector zetapair of dimension (n_m*n_w)x1
zetapair=randn(n_m*n_w,1); %(n_m*n_w)x1
%Vector etasingle of dimension (n_m*n_w)x1
etasingle=max(randn(n_m,R),[],2); %n_mx1
etasingle=repmat(etasingle, n_w,1); %(n_m*n_w)x1
%Vector zetasingle of dimension (n_m*n_w)x1
zetasingle=max(randn(n_w,R),[],2); %n_wx1
zetasingle=kron(zetasingle, ones(n_m,1)); %(n_m*n_w)x1
Let me now translate these draws into statistical terms:
For t=1,...,n_w*n_m, X(t) can be thought of as a realisation of a random variable X_t
For t=1,...,n_w*n_m, Z(t) can be thought of as a realisation of a random variable Z_t
For t=1,...,n_w*n_m, etapair(t) can be thought of as a realisation of a random variable E_t
For t=1,...,n_w*n_m, zetapair(t) can be thought of as a realisation of a random variable Q_t
For t=1,...,n_w*n_m, etasingle(t) can be thought of as a realisation of a random variable Y_t
For t=1,...,n_w*n_m, zetasingle(t) can be thought of as a realisation of a random variable S_t
My belief was that the pseudo random number generator in Matlab allows one to claim that
(X_1,X_2,..., Z_1,Z_2,...,E_1,E_2,..., Q_1,Q_2...,Y_1,Y_2,...,S_1,S_2,...) are mutually independent
as explained here
As a check of this hypothetical claim, I define W_t:=-E_t-Q_t+Y_t+S_t and empirically compute Pr(W_t<=1|X_t=5, Z_t=1)
If mutual independence holds, then Pr(W_t<=1|X_t=5, Z_t=1)=Pr(W_t<=1) and their empirical counterparts below, named option1 and option2, should be ALMOST the same.
%option 1
num1=zeros(n_m*n_w,1);
for h=1:n_m*n_w
if -etapair(h)-zetapair(h)+etasingle(h)+zetasingle(h)<=1 && XZ(h,1)==5 && XZ(h,2)==1
num1(h)=1;
end
end
den1=zeros(n_m*n_w,1);
for h=1:n_m*n_w
if XZ(h,1)==5 && XZ(h,2)==1
den1(h)=1;
end
end
option1=sum(num1)/sum(den1);
%option 2
num2=zeros(n_m*n_w,1);
for h=1:n_m*n_w
if -etapair(h)-zetapair(h)+etasingle(h)+zetasingle(h)<=1
num2(h)=1;
end
end
option2=sum(num2)/(n_m*n_w);
Question: is the difference between option1 (=0.0021) and option2 (=0.0012) attributable to the "ALMOST", or am I doing something wrong?
By the very nature of observing random events, you cannot guarantee theoretically accurate results for a given empirical trial.
You have set rng default at the start of your script, which means you will always get the same result (option1 = 0.0021, option2 = 0.0012).
By running your script many times and averaging the results, we should approach the theoretical result.
kk = 10000;
option1 = zeros(kk, 1);
option2 = zeros(kk, 1);
for ii = 1:kk
% No need to use 'clear' here. If you were concerned
% for some reason, you could use 'clearvars -except kk option1 option2 ii'
% do not use 'rng default'. Use 'rng shuffle' if anything, but not necessary
Xsup = -1:6;
% ... all your other code
% replace 'option1=...' with 'option1(ii)=...'
% replace 'option2=...' with 'option2(ii)=...'
end
fprintf('Results:\nMean option1 = %f\nMean option2 = %f\n', mean(option1), mean(option2));
Results:
>> Mean option1 = 0.001461
>> Mean option2 = 0.001458
We can see these are the same to some degree of accuracy, which can be made arbitrarily high by running enough trials. This is as expected for independent variables.
Note, if you have the parallel computing toolbox, this for loop can easily be swapped for a parfor, and you can run trials many times faster.
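As a further side note, the two loops that compute option1 and option2 can be replaced by logical indexing inside each trial, which also speeds things up. A sketch reusing the variable names from the question (W is just a hypothetical name for the combined draw; if sum(cond) happens to be zero for a trial, that trial would have to be skipped):
W = -etapair - zetapair + etasingle + zetasingle;   % (n_m*n_w)x1 combined draw
cond = XZ(:,1)==5 & XZ(:,2)==1;                     % selects observations with X_t=5 and Z_t=1
option1(ii) = sum(W<=1 & cond) / sum(cond);         % empirical Pr(W_t<=1 | X_t=5, Z_t=1)
option2(ii) = mean(W<=1);                           % empirical Pr(W_t<=1)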
Summary: This question deals with the improvement of an algorithm for the computation of linear regression.
I have a 3D array (dlMAT) representing monochrome photographs of the same scene taken at different exposure times (the vector IT). Mathematically, every vector along the 3rd dimension of dlMAT represents a separate linear regression problem that needs to be solved. The equation whose coefficients need to be estimated is of the form:
DL = R*IT^P, where DL and IT are obtained experimentally and R and P must be estimated.
The above equation can be transformed into a simple linear model after applying a logarithm:
log(DL) = log(R) + P*log(IT) => y = a + b*x
Presented below is the most "naive" way to solve this system of equations, which essentially involves iterating over all "3rd dimension vectors" and fitting a polynomial of order 1 to (IT, DL(ind1,ind2,:)):
%// Define some nominal values:
R = 0.3;
IT = 600:600:3000;
P = 0.97;
%// Impose some believable spatial variations:
pMAT = 0.01*randn(3)+P;
rMAT = 0.1*randn(3)+R;
%// Generate "fake" observation data:
dlMAT = bsxfun(@times,rMAT,bsxfun(@power,permute(IT,[3,1,2]),pMAT));
%// Regression:
sol = cell(size(rMAT)); %// preallocation
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = polyfit(log(IT(:)),log(squeeze(dlMAT(ind1,ind2,:))),1);
end
end
fittedP = cellfun(@(x)x(1),sol); %// Estimate of pMAT
fittedR = cellfun(@(x)exp(x(2)),sol); %// Estimate of rMAT
The above approach seems like a good candidate for vectorization, since it does not utilize MATLAB's main strength, namely MATrix operations. For this reason, it does not scale very well and takes much longer to execute than I think it should.
There exist alternative ways to perform this computation based on matrix division, as demonstrated here and here, which involve something like this:
sol = [ones(size(x)),log(x)]\log(y);
That is, appending a vector of 1s to the observations, followed by mldivide to solve the equation system.
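For a single pixel the recipe would look roughly like this (a sketch using the variable names defined above; the back-transformation to R and P follows from the log-linear model):
x = IT(:);                              % exposure times
y = squeeze(dlMAT(1,1,:));              % one vector along the 3rd dimension
c = [ones(size(x)), log(x)] \ log(y);   % c(1) = log(R), c(2) = P
R_est = exp(c(1));
P_est = c(2);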
The main challenge I'm facing is how to adapt my data to the algorithm (or vice versa).
Question #1: How can the matrix-division-based solution be extended to solve the problem presented above (and potentially replace the loops I am using)?
Question #2 (bonus): What is the principle behind this matrix-division-based solution?
The secret ingredient behind the solution that includes matrix division is the Vandermonde matrix. The question discusses a linear problem (linear regression), and those can always be formulated as a matrix problem, which \ (mldivide) can solve in a mean-square error sense. Such an algorithm, solving a similar problem, is demonstrated and explained in this answer.
Below is benchmarking code that compares the original solution with two alternatives suggested in chat (1, 2):
function regressionBenchmark(numEl)
clc
if nargin<1, numEl=10; end
%// Define some nominal values:
R = 5;
IT = 600:600:3000;
P = 0.97;
%// Impose some believable spatial variations:
pMAT = 0.01*randn(numEl)+P;
rMAT = 0.1*randn(numEl)+R;
%// Generate "fake" measurement data using the relation "DL = R*IT.^P"
dlMAT = bsxfun(@times,rMAT,bsxfun(@power,permute(IT,[3,1,2]),pMAT));
%% // Method1: loops + polyval
disp('-------------------------------Method 1: loops + polyval')
tic; [fR,fP] = method1(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
%% // Method2: loops + Vandermonde
disp('-------------------------------Method 2: loops + Vandermonde')
tic; [fR,fP] = method2(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
%% // Method3: vectorized Vandermonde
disp('-------------------------------Method 3: vectorized Vandermonde')
tic; [fR,fP] = method3(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
function [fittedR,fittedP] = method1(IT,dlMAT)
sol = cell(size(dlMAT,1),size(dlMAT,2));
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = polyfit(log(IT(:)),log(squeeze(dlMAT(ind1,ind2,:))),1);
end
end
fittedR = cellfun(@(x)exp(x(2)),sol);
fittedP = cellfun(@(x)x(1),sol);
function [fittedR,fittedP] = method2(IT,dlMAT)
sol = cell(size(dlMAT,1),size(dlMAT,2));
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = flipud([ones(numel(IT),1) log(IT(:))]\log(squeeze(dlMAT(ind1,ind2,:)))).';
end
end
fittedR = cellfun(@(x)exp(x(2)),sol);
fittedP = cellfun(@(x)x(1),sol);
function [fittedR,fittedP] = method3(IT,dlMAT)
N = 1; %// Degree of polynomial
VM = bsxfun(@power, log(IT(:)), 0:N); %// Vandermonde matrix
result = fliplr((VM\log(reshape(dlMAT,[],size(dlMAT,3)).')).');
%// Compressed version:
%// result = fliplr(([ones(numel(IT),1) log(IT(:))]\log(reshape(dlMAT,[],size(dlMAT,3)).')).');
fittedR = exp(real(reshape(result(:,2),size(dlMAT,1),size(dlMAT,2))));
fittedP = real(reshape(result(:,1),size(dlMAT,1),size(dlMAT,2)));
The reason why method 2 can be vectorized into method 3 is essentially that matrix multiplication can be separated by the columns of the second matrix. If A*B produces matrix X, then by definition A*B(:,n) gives X(:,n) for any n. Moving A to the other side with mldivide, this means that the divisions A\X(:,n) can be done in one go for all n with A\X. The same holds for an over-determined system (a linear regression problem), in which there is in general no exact solution and mldivide finds the solution that minimizes the mean-square error. In this case too, the operations A\X(:,n) (method 2) can be done in one go for all n with A\X (method 3).
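A small numerical sketch of this property, using arbitrary test matrices just to illustrate that the columnwise divisions coincide with the single division:
A = rand(5,2);                    % over-determined: 5 equations, 2 unknowns
X = rand(5,3);                    % three right-hand sides
B_all = A\X;                      % all columns solved in one go
B_one = A\X(:,2);                 % a single column solved on its own
disp(norm(B_all(:,2) - B_one))    % essentially zero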
The implications of improving the algorithm when increasing the size of dlMAT can be seen below:
For the case of 500*500 (or 2.5E5) elements, the speedup from Method 1 to Method 3 is about x3500!
It is also interesting to observe the output of profile (here, for the case of 500*500):
[Profiler outputs for Method 1, Method 2 and Method 3]
From the above it is seen that rearranging the elements via squeeze and flipud takes up about half (!) of the runtime of Method 2. It is also seen that some time is lost on the conversion of the solution from cells to matrices.
Since the 3rd solution avoids all of these pitfalls, as well as the loops altogether (which mostly means re-evaluation of the script on every iteration), it unsurprisingly results in a considerable speedup.
Notes:
There was very little difference between the "compressed" and the "explicit" versions of Method 3, slightly in favor of the "explicit" version. For this reason the compressed version was not included in the comparison.
A solution was attempted where the inputs to Method 3 were gpuArray-ed. This did not provide improved performance (and even somewhat degraded it), possibly due to a wrong implementation, or to the overhead associated with copying matrices back and forth between RAM and VRAM.
I first define some differential equations:
%% Definitions
% Constants
syms L R J Ke p
% Input
syms ud uq m
% Output
syms id iq ome theta
% Derivatives
syms did diq dome dtheta
%% Equations
did=(ud/L)-(R/L)*id+ome*iq;
diq=(uq/L)-(R/L)*iq-ome*id-(Ke/L)*ome;
dome = (p/J)*((3/2)*p*Ke*iq-m);
dtheta = ome;
I'm trying to calculate R and L now. The input and output variables come from Simulink:
idvalues = DQ_OUT.signals.values(:,1);
iqvalues = DQ_OUT.signals.values(:,2);
udvalues = UIdq.signals.values(:,1);
uqvalues = UIdq.signals.values(:,2);
% ... define some position in these arrays ...
% Define values for symbolic variables
id=idvalues(position);
ud=udvalues(position);
iq=iqvalues(position);
ome=iqvalues(position);
These are doubles. I then eval the first equation:
eval(did)
And I get this crap:
ans =
6002386699416615/(18014398509481984*L) - (846927175344863*R)/(1125899906842624*L) + 4168268387464377/9007199254740992
I was thinking that a mathematics tool like Matlab wouldn't bother you with variable types, but what I see here is definitely a variable-type problem; the actual values are less than 1:
Specifically:
id = 0.7522
ud = 0.3332
iq = 0.6803
ome = 0.6803
When doing symbolic calculations, Matlab represents decimal values as exact rational numbers. This prevents floating-point numerical issues and keeps the results exact. However, as you found, it makes the results harder to read.
Matlab also has a vpa (variable precision arithmetic) function, which is capable of keeping up to 2^(29)+1 digits (apparently) in calculations, which means Matlab doesn't need to stick to rational numbers in order to maintain accurate results.
Before viewing the output of a symbolic calculation, use vpa to convert rational numbers with large numerators/denominators to decimal expansions, by using, in your case, vpa(eval(did)).
For example, defining
syms a
b=0.75221
then a*b gives
>> a*b
ans =
(75221*a)/100000
but vpa(a*b) gives
>> vpa(a*b)
ans =
0.75221*a
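As an alternative to overwriting the symbolic variables and calling eval, you could keep them symbolic and substitute the numeric values with subs, then apply vpa. A sketch with the values from the question (the 6 limits the displayed digits and is only illustrative):
% assumes id, ud, iq and ome are still symbolic, i.e. not yet overwritten by doubles
expr = subs(did, {id, ud, iq, ome}, {0.7522, 0.3332, 0.6803, 0.6803});
vpa(expr, 6)   % 6 significant digits, purely for readability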
I am trying to solve a system of differential equations by writing code in Matlab. I am posting on this forum, hoping that someone might be able to help me in some way.
I have a system of 10 coupled differential equations. It is a vector-host epidemic model, which captures the transmission of a disease between a human population and an insect population. Since it is a simple system of differential equations, I am using a solver for non-stiff problems (ode45).
There are 10 differential equations, one for each of the 10 state variables. There are two functions which contain the same system of 10 coupled ODEs. One is called NoEffects_derivative_6_15_2012.m and contains the original system of ODEs. The other is called OnlyLethal_derivative_6_15_2012.m and contains the same system of ODEs with an increased withdrawal rate starting at time gamma=32 days; that withdrawal rate decays exponentially with time.
I use ode45 to solve both systems, using the same initial conditions. The time vector is also the same for both systems, going from t0 to tfinal. The vector tspan contains the time values from t0 to tfinal in increments of 0.25 days, making a total of 157 time values.
The solution values are stored in the matrices ye0 and yeL. Both matrices contain 157 rows and 10 columns (one column per state variable). When I compare the value of the 10th state variable in ye0 and yeL by plotting the difference (using the command plot(te0,ye0(:,10)-yeL(:,10))), I find that it becomes negative for some time values. This is not expected: for all time values from t0 to tfinal, the value of the 10th state variable in ye0 should be greater, as it is the solution obtained from the system of ODEs without the increased withdrawal rate.
I am told that there is a bug in my Matlab code, but I am not sure how to find it. Or maybe the solver I am using (ode45) is not suitable and causes this kind of problem. Can anyone help?
I have tried ode23 and ode113 as well, and I get the same problem. Figure (2) shows a curve which becomes negative for time values 32 and 34, and this is a result which is not expected; the curve should have a positive value throughout, for all time values. Is there any other forum anyone can suggest?
Here is the main script file:
clear memory; clear all;
global Nc capitalambda muh lambdah del1 del2 p eta alpha1 alpha2 muv lambdav
global dims Q t0 tfinal gamma Ct0 b1 b2 Ct0r b3 H C m_tilda betaHV bitesPERlanding IC
global tspan Hs Cs betaVH k landingARRAY muARRAY
Nhh=33898857; Nvv=2*Nhh; Nc=21571585; g=354; % number of public health centers in Bihar state
%Fix human parameters
capitalambda= 1547.02; muh=0.000046142; lambdah= 0.07; del1=0.001331871263014; del2=0.000288658; p=0.24; eta=0.0083; alpha1=0.044; alpha2=0.0217;
%Fix vector parameters
muv=0.071428; % UNIT:2.13 SANDFLIES DEAD/SAND FLY/MONTH, SOURCE: MUBAYI ET AL., 2010
lambdav=0.05; % UNIT:1.5 TRANSMISSIONS/MONTH, SOURCE: MUBAYI ET AL., 2010
Ct0=0.054;b1=0.0260;b2=0.0610; Ct0r=0.63;b3=0.0130;
dimsH=6; % AS THERE ARE FIVE HUMAN COMPARTMENTS
dimsV=3; % AS THERE ARE TWO VECTOR COMPARTMENTS
dims=dimsH+dimsV; % THE TOTAL NUMBER OF COMPARTMENTS OR DIFFERENTIAL EQUATIONS
gamma=32; % spraying is done of 1st feb of the year
Q=0.2554; H=7933615; C=5392890;
m_tilda=100000; % assumed value 6.5, later I will have to get it for sand flies or mosquitoes
betaHV=66.67/1000000; % estimated value from the short technical report sent by Anuj
bitesPERlanding=lambdah/(m_tilda*betaHV); betaVH=lambdav/bitesPERlanding;
IC=zeros(dims+1,1); % CREATES A MATRIX WITH DIMS+1 ROWS AND 1 COLUMN WITH ALL ELEMENTS AS ZEROES
t0=1; tfinal=40;
for j=t0:1:(tfinal*4-4)
    tspan(1)= t0;
    tspan(j+1)= tspan(j)+0.25;
end
clear j;
% INITIAL CONDITION OF HUMAN COMPARTMENTS
q1=0.8; q2=0.02; q3=0.0005; q4=0.0015;
IC(1,1) = q1*Nhh; IC(2,1) = q2*Nhh; IC(3,1) = q3*Nhh; IC(4,1) = q4*Nhh; IC(5,1) = (1-q1-q2-q3-q4)*Nhh; IC(6,1) = Nhh;
% INTIAL CONDITIONS OF THE VECTOR COMPARTMENTS
IC(7,1) = 0.95*Nvv; %80 PERCENT OF TOTAL ARE ASSUMED AS SUSCEPTIBLE VECTORS
IC(8,1) = 0.05*Nvv; %20 PRECENT OF TOTAL ARE ASSUMED AS INFECTED VECTORS
IC(9,1) = Nvv; IC(10,1)=0;
Hs=2000000; Cs=3000000; k=1; landingARRAY=zeros(tfinal*50,2); muARRAY=zeros(tfinal*50,2);
[te0 ye0]=ode45(@NoEffects_derivative_6_15_2012,tspan,IC);
[teL yeL]=ode45(@OnlyLethal_derivative_6_15_2012,tspan,IC);
figure(1)
subplot(4,3,1); plot(te0,ye0(:,1),'b-',teL,yeL(:,1),'r-'); xlabel('time'); ylabel('S'); legend('susceptible humans');
subplot(4,3,2); plot(te0,ye0(:,2),'b-',teL,yeL(:,2),'r-'); xlabel('time'); ylabel('I'); legend('Infectious Cases');
subplot(4,3,3); plot(te0,ye0(:,3),'b-',teL,yeL(:,3),'r-'); xlabel('time'); ylabel('G'); legend('Cases in Govt. Clinics');
subplot(4,3,4); plot(te0,ye0(:,4),'b-',teL,yeL(:,4),'r-'); xlabel('time'); ylabel('T'); legend('Cases in Private Clinics');
subplot(4,3,5); plot(te0,ye0(:,5),'b-',teL,yeL(:,5),'r-'); xlabel('time'); ylabel('R'); legend('Recovered Cases');
subplot(4,3,6);plot(te0,ye0(:,6),'b-',teL,yeL(:,6),'r-'); hold on; plot(teL,capitalambda/muh); xlabel('time'); ylabel('Nh'); legend('Nh versus time');hold off;
subplot(4,3,7); plot(te0,ye0(:,7),'b-',teL,yeL(:,7),'r-'); xlabel('time'); ylabel('X'); legend('Susceptible Vectors');
subplot(4,3,8); plot(te0,ye0(:,8),'b-',teL,yeL(:,8),'r-'); xlabel('time'); ylabel('Z'); legend('Infected Vectors');
subplot(4,3,9); plot(te0,ye0(:,9),'b-',teL,yeL(:,9),'r-'); xlabel('time'); ylabel('Nv'); legend('Nv versus time');
subplot(4,3,10);plot(te0,ye0(:,10),'b-',teL,yeL(:,10),'r-'); xlabel('time'); ylabel('FS'); legend('Total number of human infections');
figure(2)
plot(te0,ye0(:,10)-yeL(:,10)); xlabel('time'); ylabel('FS(without intervention)-FS(with lethal effect)'); legend('Diff. bet. VL cases with and w/o intervention:ode45');
The function file: NoEffects_derivative_6_15_2012
function dx = NoEffects_derivative_6_15_2012( t , x )
global Nc capitalambda muh del1 del2 p eta alpha1 alpha2 muv
global dims m_tilda betaHV bitesPERlanding betaVH
dx = zeros(dims+1,1); % t % dx
dx(1,1) = capitalambda-(m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/(x(7,1)+x(8,1))-muh*x(1,1);
dx(2,1) = (m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/(x(7,1)+x(8,1))-(del1+eta+muh)*x(2,1);
dx(3,1) = p*eta*x(2,1)-(del2+alpha1+muh)*x(3,1);
dx(4,1) = (1-p)*eta*x(2,1)-(del2+alpha2+muh)*x(4,1);
dx(5,1) = alpha1*x(3,1)+alpha2*x(4,1)-muh*x(5,1);
dx(6,1) = capitalambda -del1*x(2,1)-del2*x(3,1)-del2*x(4,1)-muh*x(6,1);
dx(7,1) = muv*(x(7,1)+x(8,1))-bitesPERlanding*betaVH*x(7,1)*x(2,1)/(x(6,1)+Nc)-muv*x(7,1);
%dx(8,1) = lambdav*x(7,1)*x(2,1)/(x(6,1)+Nc)-muvIOFt(t)*x(8,1);
dx(8,1) = bitesPERlanding*betaVH*x(7,1)*x(2,1)/(x(6,1)+Nc)-muv*x(8,1);
dx(9,1) = (muv-muv)*x(9,1);
dx(10,1) = (m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/x(9,1);
The function file: OnlyLethal_derivative_6_15_2012
function dx=OnlyLethal_derivative_6_15_2012(t,x)
global Nc capitalambda muh del1 del2 p eta alpha1 alpha2 muv
global dims m_tilda betaHV bitesPERlanding betaVH k muARRAY
dx=zeros(dims+1,1);
% the below code saves some values into the second column of the two arrays
% t
muARRAY(k,1)=t; muARRAY(k,2)=artificialdeathrate1(t); k=k+1;
dx(1,1)= capitalambda-(m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/(x(7,1)+x(8,1))-muh*x(1,1);
dx(2,1)= (m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/(x(7,1)+x(8,1))-(del1+eta+muh)*x(2,1);
dx(3,1)=p*eta*x(2,1)-(del2+alpha1+muh)*x(3,1);
dx(4,1)=(1-p)*eta*x(2,1)-(del2+alpha2+muh)*x(4,1);
dx(5,1)=alpha1*x(3,1)+alpha2*x(4,1)-muh*x(5,1);
dx(6,1)=capitalambda -del1*x(2,1)-del2*( x(3,1)+x(4,1) ) - muh*x(6,1);
dx(7,1)=muv*( x(7,1)+x(8,1) )- bitesPERlanding*betaVH*x(7,1)*x(2,1)/(x(6,1)+Nc) - (artificialdeathrate1(t) + muv)*x(7,1);
dx(8,1)= bitesPERlanding*betaVH*x(7,1)*x(2,1)/(x(6,1)+Nc)-(artificialdeathrate1(t) + muv)*x(8,1);
dx(9,1)= -artificialdeathrate1(t) * x(9,1);
dx(10,1)= (m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/x(9,1);
The function file: artificialdeathrate1
function art1=artificialdeathrate1(t)
global Q Hs H Cs C
art1= Q*Hs*iOFt(t)/H + (1-Q)*Cs*oOFt(t)/C ;
The function file: iOFt
function i = iOFt(t)
global gamma tfinal Ct0 b1
if t>=gamma && t<=tfinal
i = Ct0*exp(-b1*(t-gamma));
else
i =0;
end
The function file: oOFt
function o = oOFt(t)
global gamma Ct0 b2 tfinal
if (t>=gamma && t<=tfinal)
o = Ct0*exp(-b2*(t-gamma));
else
o = 0;
end
If your working code is even remotely as messy as the code you posted, then that should, IMHO, be the first thing you address.
I cleaned up iOFt and oOFt a bit for you, since those were quite easy to handle, and I tried my best at NoEffects_derivative_6_15_2012. What I'd personally change about your code is the use of decent indexes. You have 10 variables; if you let your code rest for a few weeks or months, there is no way you will remember what state 7 is, for example. So instead of using (7,1), you might want to rewrite your ODE either using verbose names, retrieving/storing them in the x and dx vectors, or using index variables that make it clear what is happening.
E.g.
function ODE(t,x)
insectsInfected = x(1);
humansInfected = x(2);
%etc
dInsectsInfected = %some function of the rest
dHumansInfected = %some function of the rest
% etc
dx = [dInsectsInfected; dHumansInfected; ...];
or
function ODE(t,x)
iInsectsInfected = 1;
iHumansInfected = 2;
%etc
dx(iInsectsInfected) = %some function of x(i...)
dx(iHumansInfected) = %some function of x(i...)
%etc
When you don't do such things, you might end up using x(6,1) instead of e.g. x(3,1) in some formulas and it might take you hours to spot such a thing. If you use verbose names, it takes a bit longer to type, but it makes debugging a lot easier and if you understand your equations, it should be more obvious when such an error happens.
Also, don't hesitate to put spaces inside your formulas; it makes reading much easier. If you have sub-expressions that are meaningful (e.g. if (1-p)*eta*x(2,1) is the number of insects that are dying of the disease), just put them in a variable such as dyingInsects and use that everywhere the expression occurs, as sketched below. If you align your assignments (as I've done above), this also makes the code easier to read and understand.
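For instance, a sketch based on the fourth equation of your system, reusing the hypothetical name from the sentence above:
dyingInsects = (1-p)*eta*x(2,1);                                % hypothetical name for the sub-expression
dx(4,1)      = dyingInsects - (del2 + alpha2 + muh) * x(4,1);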
With regard to the ODE solver, if you are sure your implementation is correct, I'd also try a solver for stiff problems (unless you are absolutely sure you don't have a stiff system).
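For example, switching to a stiff solver such as ode15s is a one-line change (a sketch that keeps the same function handles, tspan and IC as in your script):
[te0 ye0] = ode15s(@NoEffects_derivative_6_15_2012, tspan, IC);
[teL yeL] = ode15s(@OnlyLethal_derivative_6_15_2012, tspan, IC);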