How do I run my M-file serially while changing a particular variable? - matlab

I want to carry out a sensitivity study for my Matlab script. The script has 3 different variables (velocity, searchVolume1, and searchVolume2) whose values are to be changed as follows: both velocity and searchVolume2 are kept constant while searchVolume1 is changed through, say, 0.5, 2.4, 3.7, 4.6, 5.1, etc. The procedure is then repeated for the next values of velocity (say 10, 20, 30, 50, 70, etc.), and afterwards searchVolume2 is in turn changed to different values (say 1, 2, 3, 4, 5, etc.). There are three output variables (t1, t2, and t3) for each run, but these come in the form of a distribution.
Now, I have turned my script into a function (myMfile_sensitivity) with the three outputs (t1, t2, and t3), and I have another script from which I call myMfile_sensitivity. However, due to the complexity involved, I have decided to only vary searchVolume1 and to use several computers for the different velocities, after which I will repeat the runs for other values of searchVolume2.
I have re-edited my M-file to look like this:
Velocity = 10;
searchVolume2 = 1;
searchVolume1 = 0.5;
% Pre-allocate storage for t1, t2, and t3
t1 = NaN(numel(X), kk);
t2 = NaN(numel(X), kk);
t3 = NaN(numel(X), kk);
for ii = 1:numel(X)  % X holds the data points of the distribution
    % ... (preparatory steps)
    while volume(ii,:) < searchVolume1(kk,:)
        % ... (search steps)
    end
    % ... (post-processing)
    t1 = ...;
    t2 = ...;
end
In my new script, I have:
searchVolume1 = [0.5, 0.5, 2.4, 3.7, 4.6, 5.1]';
for kk = 1:numel(searchVolume1)
    [t1, t2, t3] = myMfile_sensitivity(searchVolume1, kk);
end
The file runs, but I end up seeing only the result for the last searchVolume1. How do I store all the outputs? Many thanks in advance, guys!
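One way to keep every run's outputs (a minimal sketch of my own, not from the original post) is to index into cell arrays, which also cope with distributions whose lengths differ between runs:

searchVolume1 = [0.5, 0.5, 2.4, 3.7, 4.6, 5.1]';
nRuns = numel(searchVolume1);
% one cell per run, so the loop no longer overwrites earlier results
T1 = cell(nRuns, 1);
T2 = cell(nRuns, 1);
T3 = cell(nRuns, 1);
for kk = 1:nRuns
    [T1{kk}, T2{kk}, T3{kk}] = myMfile_sensitivity(searchVolume1, kk);
end
% T1{3}, for example, now holds the t1 distribution for searchVolume1(3)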

Related

Solve system of differential equations in Python

I'm trying to solve a system of differential equations in Python.
I have a system composed of two equations with two variables, A and B.
The initial conditions are A0 = 1e17 and B0 = 0; the two variables change simultaneously.
I wrote the following code using ODEINT:
import numpy as np
from scipy.integrate import odeint

def dmdt(m, t):
    A, B = m
    dAdt = A - B
    dBdt = (A - B) * A
    return [dAdt, dBdt]

# Create time domain (101 sample points; linspace(0, 100, 1) would give a single point)
t = np.linspace(0, 100, 101)

# Initial conditions
A0 = 1e17
B0 = 0
m0 = [A0, B0]

solution = odeint(dmdt, m0, t)
Apparently I obtain an output different from the expected one, but I don't understand the error.
Can someone help me?
Thanks
From A*A'-B'=0 one concludes
B = 0.5*(A^2 - A0^2)
Inserted into the first equation that gives
A' = A - 0.5*A^2 + 0.5*A0^2
= 0.5*(A0^2+1 - (A-1)^2)
This means that the A dynamic has two fixed points, at about 1+A0 and 1-A0; the solution is growing inside that interval, and the upper fixed point is stable. However, in standard floating-point numbers there is no difference between 1e17 and 1e17+1. If you want to see the difference, you have to encode it separately.
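You can check this directly; near 1e17 the spacing between adjacent double-precision numbers is 16, so adding 1 is lost to rounding:

print(1e17 + 1 == 1e17)   # True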
Also note that the standard error tolerances atol and rtol, somewhere in the range between 1e-6 and 1e-9, are totally incompatible with the scales of the problem as originally stated; this again highlights the need to rescale and shift the problem into a more manageable range of values.
Setting A = A0+u with |u| in an expected scale of 1..10 then gives
B = 0.5*u*(2*A0+u)
u' = A0+u - 0.5*u*(2*A0+u) = (1-u)*A0 + u - 0.5*u^2
This now suggests that the time scale be reduced by A0, set t=s/A0. Also, B = A0*v. Insert the direct parametrizations into the original system to get
du/ds = dA/dt / A0 = (A0+u-A0*v)/A0 = 1 + u/A0 - v
dv/ds = dB/dt / A0^2 = (A0+u-A0*v)*(A0+u)/A0^2 = (1+u/A0-v)*(1+u/A0)
u(0)=v(0)=0
Now in floating point and the expected range for u, we get 1+u/A0 == 1, so effectively u'(s) = v'(s) = 1-v, which gives
u(s) = v(s) = 1 - exp(-s),
A(t) = A0 + 1-exp(-A0*t) + very small corrections
B(t) = A0*(1-exp(-A0*t)) + very small corrections
The system in s,u,v should be well-computable by any solver in the default tolerances.
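For illustration, a minimal sketch of that rescaled computation with odeint (the names w and dm_ds are my own):

import numpy as np
from scipy.integrate import odeint

A0 = 1e17

def dm_ds(w, s):
    # rescaled system: u = A - A0, v = B/A0, s = A0*t
    u, v = w
    du_ds = 1 + u/A0 - v
    dv_ds = (1 + u/A0 - v) * (1 + u/A0)
    return [du_ds, dv_ds]

s = np.linspace(0, 10, 101)
u, v = odeint(dm_ds, [0.0, 0.0], s).T

# recover the original variables: A = A0 + u, B = A0*v, t = s/A0
# both u and v follow 1 - exp(-s) up to very small corrections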

Random number seed overlapping issue

I am using Matlab GPU computing to run a simulation. I suspect I may encounter a "random number seed" overlapping issue. My code is the following:
N = 10000;
v = rand(N,1);
p = [0:0.1:1];
pA = [0:0.1:2];
[v,p,pA] = ndgrid(v,p,pA);
v = gpuArray(v);
p = gpuArray(p);
pA = gpuArray(pA);
t = 1;
bH = 0.9;
bL = 0.6;
a = 0.5;
Y = MyFunction(v,p,pA,t,bH,bL,a);
function [RA] = MyFunction(v, p, pA, t, bH, bL, a)
    function [RA] = SSP1(v, p, pA)
        RA = 0;
        S1 = rand;
        S2 = rand;
        S3 = rand;
        vA1 = (S1<a)*bH + (S1>=a)*bL;
        vA2 = (S2<a)*bH + (S2>=a)*bL;
        vA3 = (S3<a)*bH + (S3>=a)*bL;
        if p<=t && pA>3*bL && pA<=3*bH
            if pA>vA1+vA2+vA3
                if v>=p
                    RA = p;
                end
            else
                if v+vA1+vA2+vA3>=p+pA
                    RA = p+pA;
                end
            end
        end
    end
    [RA] = gather(arrayfun(@SSP1, v, p, pA));
end
The idea of the code is the following:
I generate N random agents, each characterized by the value of v. Then for each agent I have to compute a quantity given (p,pA). As I have N agents and many combinations of (p,pA), I want to use the GPU to speed up the process. But here comes the tricky part:
for each agent, in order to finish my computation, I have to generate 3 extra random variables, vA1, vA2, and vA3. Based on my understanding of GPUs (I could be wrong), these computations run simultaneously, i.e. for each agent v the 3 random variables vA1, vA2, vA3 are generated, and the GPU performs these N procedures at the same time. However, I am not sure whether the vA1, vA2, vA3 for agent 1 and agent 2 may overlap, because N could be 1 million here. I want to make sure that the random number streams used to generate each agent's vA1, vA2, vA3 don't overlap; otherwise I am in big trouble.
There is one way to prevent this from happening: I first generate 3N of these random variables vA1, vA2, vA3 and then put them onto my GPU. However, that may require a lot of GPU memory, which I don't have. The current method, I guess, does not need much GPU memory, as I am generating vA1, vA2, vA3 on the fly?
What you say does not happen. The proof is that the following code snippet generates random values in hB.
A = ones(100,1);
dA = gpuArray(A);
[hB] = gather(arrayfun(@applyrand, dA));

function dB = applyrand(dA)
    r = rand;
    dB = dA*r;
end
That said, your random variables have very little variability, because with your use of S1, S2 and S3 you are basically flipping a coin:
vA1 = (S1<0.5)*bH + (S1>=0.5)*bL;
so vA1 is either bH or bL (the two conditions are mutually exclusive), and the same holds for vA2 and vA3; their sum vA1+vA2+vA3 can only take 4 distinct values.
Maybe this lack of variability is what is making you think that you don't have much randomness; that is not very clear from the question.
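A quick check of this (my own snippet, not from the original answer) is to tabulate the values one of these variables actually takes:

bH = 0.9; bL = 0.6; a = 0.5;
S = rand(1e6, 1);                % a million draws of S1
vA = (S < a)*bH + (S >= a)*bL;   % same construction as vA1
unique(vA)                       % returns just [0.6; 0.9]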

Algorithm for vitamin D concentration - problem with writing algorithm based on formulas

I am trying to implement in Matlab an algorithm that calculates the vitamin D concentration in the blood, based on some formulas from an article. The main formula is:
[C(t) formula image from the article]
where:
- T is the day of the year for which the concentration is measured;
- A is a constant for the simplest measurement described in the journal;
- E (sun exposure in a particular month of the year) is given in the article;
- R (vitamin D concentration after a single exposure to sunlight) can be calculated using the formula R(t) = F*(2^(-t/alpha) - 2^(-t/beta)) (see calculateR below), where F, alpha, and beta are constants and t is the day.
The author of the article wrote that, after calculating the concentration using the C(t) formula, he added a constant value of 33 on every day.
The formula for R(t) is simple and my chart is the same as in the article, but I have a problem with the formula for calculating C(t).
This is my code:
function [C] = calculateConcentration(A,E,T,R)
    C = zeros(1,T);
    C(1) = E(1)*A*R(1);
    month = 1;
    for i = 2:T
        for j = 1:i
            if mod(j,30) == 0 && month < 12
                month = month + 1;
            end
            C(i) = C(i) + E(month)*A*R(T-j+1);
        end
        month = 1;
    end
    for i = 1:T
        C(i) = C(i) + 33;
    end
end
Here is my chart:
Here is the chart from the article:
So, I have two problems with this chart. First, the biggest values on my chart are smaller than the values on the chart from the article, and second, my chart is constantly growing.
Thank you very much in advance for your help.
[EDIT] I attach the values of all constants and a function to calculate R(t).
function [R] = calculateR(T)
    R = zeros(1,T);
    F = 13;
    alpha = 30;
    beta = 3;
    for i = 1:T
        R(i) = F*(2.^(-i/alpha) - 2.^(-i/beta));
    end
end
A=0.1;
T=365;
R = calculateR(T);
E = [0.03, 0.06, 0.16, 0.25, 0.36, 0.96, 0.87, 0.89, 0.58, 0.24, 0.08, 0.02];
plot(1:T,R)
C = calculateConcentration(A,E,T,R);
figure; plot(1:T,C);
Code formatting is horrible in comments so posting this as an answer.
I have stated what I think (!) is the basic problem with your code in the comments.
Cumulative sums can get confusing very quickly, hence it is often better to write them more explicitly.
I would write the function like so:
function C = calculateConcentration(T, E, A, R)
    c = zeros(1, T);
    % compute contribution of each individual day
    for t = 1:T
        c(t) = E(mod(floor(t / 30), 12) + 1) * A * R(t);
    end
    % add offset
    c(1) = c(1) + 33;
    C = cumsum(c);
end
Disclaimer: I haven't written any matlab code in years, and don't have it installed on this machine, so make sure to test this.
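For what it's worth, this version can be exercised with the constants from the question's edit; note that the argument order (T, E, A, R) differs from the question's original function:

A = 0.1;
T = 365;
R = calculateR(T);
E = [0.03, 0.06, 0.16, 0.25, 0.36, 0.96, 0.87, 0.89, 0.58, 0.24, 0.08, 0.02];
C = calculateConcentration(T, E, A, R);
figure; plot(1:T, C);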
EDIT
Not sure if the author is plotting what you say he is plotting.
If you choose A to be 100 (which might be fine with the correct choice of units), apply the offset of 33 to all values of c (in my implementation), don't actually take the cumulative sum but return (lowercase) c instead, and then only plot the data from the midpoint of each month, then you get the following plot:
However, it is worth noting that if you plot all data points you get the following:
At face value, I would say whoever came up with this model is full of BS. But a more definitive answer would require a careful read of the paper.

Iteration of matrix-vector multiplication which stores specific index-positions

I need to solve a minimum-distance problem; to see some of the work which has been tried, take a look at:
link: click here
I have four elements: two column vectors, alpha of dim (px1) and beta of dim (qx1). In this case p = q = 50, giving two column vectors of dim (50x1) each. They are defined as follows:
alpha = 0:0.05:2;
beta = 0:0.05:2;
and I have two matrices: L1 and L2.
L1 is composed of three column-vectors of dimension (kx1) each.
L2 is composed of three column-vectors of dimension (mx1) each.
In this case, they have equal size, meaning that k = m = 1000 giving: L1 and L2 of dim (1000x3) each. The values of these matrices are predefined.
They have, nevertheless, the following structure:
L1(kx3) = [t1(kx1) t2(kx1) t3(kx1)];
L2(mx3) = [t1(mx1) t2(mx1) t3(mx1)];
The min. distance problem I need to solve is given (mathematically) as follows:
d = min( (x-(alpha_p*t1_k - beta_q*t1_m)).^2 + (y-(alpha_p*t2_k - beta_q*t2_m)).^2 +
(z-(alpha_p*t3_k - beta_q*t3_m)).^2 )
The values x, y, z are three fixed constants.
My problem
I need to develop an iteration which gives me back the index positions from the combinations of alpha, beta, L1 and L2 which fulfill the min-distance problem from above.
I hope the formulation of the problem is clear; I have been very careful with the index notation. But in case it is still not clear, the index ranges are:
alpha: p = 1,...,50
beta: q = 1,...,50
for L1; t1, t2, t3: k = 1,...,1000
for L2; t1, t2, t3: m = 1,...,1000
And I need to find the indices p, q, k, and m which give me the min. distance to the point (x,y,z).
Thanks in advance for your help!
I don't know your values so I wasn't able to check my code. I am using loops because it is the most obvious solution. Pretty sure that someone from the bsxfun-brigade ( ;-D ) will find a shorter/more effective solution.
alpha = 0:0.05:2;
beta = 0:0.05:2;
% L1 (kx3) and L2 (mx3) are assumed given, with columns [t1 t2 t3]
idx_smallest_d = [1,1,1,1];
smallest_d = (x-(alpha(1)*L1(1,1) - beta(1)*L2(1,1))).^2 + (y-(alpha(1)*L1(1,2) - beta(1)*L2(1,2))).^2 +...
    (z-(alpha(1)*L1(1,3) - beta(1)*L2(1,3))).^2;
for p=1:1:50
    for q=1:1:50
        for k=1:1:1000
            for m=1:1:1000
                d = (x-(alpha(p)*L1(k,1) - beta(q)*L2(m,1))).^2 + (y-(alpha(p)*L1(k,2) - beta(q)*L2(m,2))).^2 +...
                    (z-(alpha(p)*L1(k,3) - beta(q)*L2(m,3))).^2;
                if d < smallest_d
                    smallest_d = d;
                    idx_smallest_d = [p,q,k,m];
                end
            end
        end
    end
end
What I am doing is predefining the smallest distance as the distance of the first combination, and then checking, for each combination, whether the distance is smaller than the previous shortest distance.
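Since a bsxfun-style solution is hinted at above, here is a partially vectorized sketch (my own, untested; it assumes Matlab R2016b+ implicit expansion, so use bsxfun on older versions). For each (p,q) pair it evaluates all k-by-m distances at once:

nA = numel(alpha); nB = numel(beta);
smallest_d = inf;
idx_smallest_d = [1, 1, 1, 1];
for p = 1:nA
    for q = 1:nB
        % k-by-m matrix of squared distances for this (p,q)
        D = (x - (alpha(p)*L1(:,1) - beta(q)*L2(:,1).')).^2 + ...
            (y - (alpha(p)*L1(:,2) - beta(q)*L2(:,2).')).^2 + ...
            (z - (alpha(p)*L1(:,3) - beta(q)*L2(:,3).')).^2;
        [dmin, lin] = min(D(:));
        if dmin < smallest_d
            smallest_d = dmin;
            [k, m] = ind2sub(size(D), lin);
            idx_smallest_d = [p, q, k, m];
        end
    end
end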

for loop for time dependent parameter values in ode solver only works for some values of t

I'm using a simple if statement to change my parameter values within my ODE script. Here is an example script I wrote that exhibits the same problem. So first the version which works:
function aah = al(t, x)
    if (t > 10000 && t < 10300)
        ab = [0; 150];
    else
        ab = [150; 0];
    end
    aah = ab;
end
this can be run using
t = [0:1:10400];
x0 = [0,0];
[t,x] = ode23tb(@al, t, x0);
and visualised with
plot(t,x(:,1))
plot(t,x(:,2))
Ok that's the good version. Now if all you do is change t to
t = [0:1:12000];
the whole thing blows up. You might think it's just Matlab averaging out the graph, but it's not, because if you look at
x(10300,2)
the answer should be the same in both cases, since the code hasn't changed. But this second version outputs 0, which is wrong!
What on earth is going on? Anyone got an idea?
Thank you so much for any help
Your function is constant (except for 10000 < t < 10300), and therefore the solver starts solving the system with a very large internal time step, 10% of the total time by default. (In an adaptive ODE solver, if the system is constant, the higher-order and lower-order solutions coincide, so the (estimated) error is zero, and the solver assumes that the current time step is good enough.) You can see this if you give tspan with just two elements, start and end time.
t = [0 12000];
Usually tspan does not affect the internal time step of the solver. The solvers advance with their own internal time steps and then just interpolate at the tspan points given by the user. So if the internal time step unfortunately "leaps over" the interval [10000, 10300], the solver won't know about the interval.
So you had better set the maximum step size to something well below 300:
options = odeset('MaxStep', 10);
[t, x] = ode23tb(@al, t, x0, options);
If you don't want to solve with a small step size the whole time (and if you "know" when the function is not constant), you should solve the pieces separately:
t1 = [0 9990];
t2 = [9990 10310];
t3 = [10310 12000];
[T1, x1] = ode23tb(@al, t1, x0);
[T2, x2] = ode23tb(@al, t2, x1(end,:));
[T3, x3] = ode23tb(@al, t3, x2(end,:));
T = [T1; T2(2:end); T3(2:end)];
x = [x1; x2(2:end,:); x3(2:end,:)];
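With the stitched solution you can verify (my own quick check) that the pulse is now resolved:

plot(T, x(:,2))                 % the ramp between t = 10000 and 10300 shows up
x(find(T >= 10300, 1), 2)       % nonzero now, unlike in the unlucky long-tspan run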