How do I increase stepsize in MATLAB? - matlab

For my computing course, I am asked to solve an ODE, using Euler's method.
My code runs, however, I am now asked the following:
"Increase N a number of times, according to N=100,200,400,800.... so you can obtain the answers y100,y200,y400,y800...."
Here's my code:
function [xar,yar] = eulsol(a,b,ybouco,N)
h=(b-a)/N;
T=a:h:b;
y(1)=ybouco;
for i = 100:N
f(i) = -8*y(i) + 0.5*T(i) + (1/16);
y(i+1) = y(i)+h*f(i);
end
xar=T;
yar=y;
end
Can someone help me obtain a nice table in MATLAB that shows the arrays x and y for increasing N (100, 200, 400, 800, ...)?

Let's define K as the number of values of N. In your example, K = 4 (N = 100, 200, 400, 800). If N = 100, 200, 400, 800, 1600, 3200, then K = 6.
Note that the ith element of N corresponds to 100*2^(i-1):
i = 1 => N = 100 * 2^(1-1) = 100
i = 2 => N = 100 * 2^(2-1) = 200
i = 3 => N = 100 * 2^(3-1) = 400
and so on...
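In MATLAB the whole sequence of N values can be generated in one line (a small sketch, with K as defined above):
K = 4;                      % number of runs
Nvals = 100 * 2.^(0:K-1)    % gives 100   200   400   800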
So if you want to calculate for N=100,200,400,800, your code should be:
function [xar,yar] = eulsol(a,b,ybouco,K)
N_max = 100 * 2^(K-1)
h=(b-a)/N_max;
T=a:h:b;
y(1)=ybouco;
for i = 1:K
N = 100 * 2^(i-1)
f(N) = -8*y(N) + 0.5*T(N) + (1/16);
y(N+1) = y(N)+h*f(N);
end
xar=T;
yar=y;
end
This answer is for generating the correct N inside the for loop, but you should review your code! As you can see, for i = 1 you get N = 100, and to calculate f(100) you need y(100), but you don't have y(100), only y(1).
Maybe what you meant is f(i) = -8*y(i) + 0.5*T(N) + (1/16);
But again, what is T(N)?
Please, as noted by @Argyll, explain what you want; you shouldn't expect people to work out your intention from broken code.
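That said, if the table you want simply lists the final value of y (the Euler approximation at x = b) for each N, one way to build it, assuming eulsol is first fixed so that its loop runs over i = 1:N, is the sketch below (a, b and ybouco are whatever values you already use):
Nvals = [100 200 400 800];
y_end = zeros(size(Nvals));
for k = 1:numel(Nvals)
[xar, yar] = eulsol(a, b, ybouco, Nvals(k));   % eulsol with its loop changed to: for i = 1:N
y_end(k) = yar(end);                           % Euler approximation of y at x = b
end
table(Nvals(:), y_end(:), 'VariableNames', {'N', 'y_at_b'})   % table() needs R2013b or newer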

Related

How can I vectorize slow code in MATLAB to increase performance?

How can I vectorize this code? At the moment it runs very slowly. I'm really stuck; I've spent the last couple of hours trying to vectorize it, but I can't seem to get it to work correctly.
My naive program below works incredibly slowly. N should really be 10,000, but the program already struggles with N = 100. Any advice would be appreciated.
The code iterates through the given functions N times for each value of w21, and then plots the last 200 values for each value of w21. The code below does work as expected in terms of the plot, but as mentioned it is far too slow, since a good plot needs values in the thousands.
hold on
% Number of iterations
N = 100;
x = 1;
y = 1;
z = 1;
for w21 = linspace(-12,-3,N)
for i = 1:N-1
y = y_iterate(x,z,w21);
z = z_iterate(y);
x = x_iterate(y);
if i >= (N - 200)
p = plot(w21,x,'.k','MarkerSize',3);
end
end
end
Required functions:
function val = x_iterate(y)
val = -3 + 8.*(1 ./ (1 + exp(-y)));
end
function val = z_iterate(y)
val = -7 + 8.*(1 ./ (1 + exp(-y)));
end
function val = y_iterate(x,z,w21)
val = 4 + w21.*(1 ./ (1 + exp(-x))) + 6.*(1 ./ (1 + exp(-z)));
end
I believe it's because of plot. Try:
[X,Y,Z] = deal( zeros(N,N-1) );
w21 = linspace(-12,-3,N);
x = 1; y = 1; z = 1;   % same starting values as in the question
for i = 1:N
for j = 1:N-1
y = y_iterate(x,z,w21(i));
z = z_iterate(y);
x = x_iterate(y);
X(i,j) = x;
Y(i,j) = y;
Z(i,j) = z;
end
end
nn = max(1, size(X,2)-199);             % first of the last 200 iterations (or all of them when N is small)
plot(w21, X(:,nn:end), '.k', 'MarkerSize', 3)   % one plot call instead of one per point

How to store two variables (x,y) from a loop?

I am generating two different coordinates (x, y) within a loop. I have just realised that my code only saves the values from the last iteration of the loop. I am, however, trying to save the values from all setsize iterations. I already tried to save them using something like:
circleposition = [0:length(setsize) x(i),y(i)];
But, it seems that I am not doing it correctly, getting the following error:
Subscript indices must either be real positive integers or logicals.
Error using vertcat
Dimensions of matrices being concatenated are not consistent.
Here is my original code:
setsize = 9;
r = 340;
cx = 500;
cy = 500;
anglesegment = 2 * pi/setsize;
circleposition = [];
for i = drange (0:setsize)
x = r * cos(i*anglesegment) + cx;
y = r * sin(i*anglesegment) + cy;
circleposition = [x,y];
end
Output:
circleposition =
0 1.0000
840.0000 500.0000
It only keeps the values from the last iteration. I need to get 9 x's and 9 y's (depending on the setsize variable).
It's kind of hard to follow which error message comes from which attempt, but let's have a look.
I don't have access to the Parallel Computing Toolbox, which seems necessary to use the for loop over the distributed range drange, but I assume this loop can be replaced by for i = 0:setsize for testing.
Now, when starting at i = 0, you would try to access x(0) and y(0), which is not allowed (Subscript indices must either be real positive integers or logicals). Also, you would get 10 values instead of the 9 you stated in your question. So, let's start at i = 1.
To store all 9 pairs of x and y, your circleposition should be a 9 x 2 array. So, initialize it with, for example, circleposition = zeros(setsize, 2).
Last, you need to use proper indexing to store [x, y] in the i-th row of circleposition, i.e. circleposition(i, :).
So, the corrected code (attention on the replaced drange part) could look like this:
setsize = 9;
r = 340;
cx = 500;
cy = 500;
anglesegment = 2 * pi/setsize;
circleposition = zeros(setsize, 2); % Initialize circleposition appropriately
for i = 1:setsize % Start at i = 1
x = r * cos(i*anglesegment) + cx;
y = r * sin(i*anglesegment) + cy;
circleposition(i, :) = [x, y]; % Correct indexing of the row
end
circleposition % Output
The output would then be:
circleposition =
760.46 718.55
559.04 834.83
330.00 794.45
180.50 616.29
180.50 383.71
330.00 205.55
559.04 165.17
760.46 281.45
840.00 500.00
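As a quick sanity check (not part of the original answer), every row of circleposition should lie at distance r = 340 from the centre (cx, cy):
radii = sqrt(sum(bsxfun(@minus, circleposition, [cx cy]).^2, 2))   % every entry should be 340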
On the second error (Error using vertcat: Dimensions of matrices being concatenated are not consistent.): I don't see where you used vertical concatenation at all.
Here is code that works:
setsize = 9;
r = 340;
cx = 500;
cy = 500;
anglesegment = 2 * pi/setsize;
circleposition = zeros(setsize + 1, 2); % Changed from circleposition = []
for i = drange (0:setsize)
x = r * cos(i*anglesegment) + cx;
y = r * sin(i*anglesegment) + cy;
circleposition((i+1),:) = [x,y]; % Changed from circleposition = [x,y];
end
Explanation:
The fix was changing circleposition = [x,y]; to circleposition((i+1),:) = [x,y];. Without the ((i+1),:) index, you are overwriting circleposition on every iteration instead of adding a new row to it.
Changing circleposition = []; to circleposition = zeros(setsize + 1, 2); was not required; it's just recommended to pre-allocate memory for speed, which is not an issue for a small number of elements.
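If you want to see why pre-allocation is recommended once the array gets large, here is a standalone timing sketch (unrelated to the circle data):
n = 1e5;
tic; v = []; for k = 1:n, v(k) = k; end; toc          % array grows on every iteration
tic; w = zeros(1,n); for k = 1:n, w(k) = k; end; toc  % array allocated once up front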

MATLAB - Finding Zero of Sum of Functions by Iteration

I am trying to sum a function and then attempting to find the root of said function. That is, for example:
Consider that I have a matrix X and a vector t of values: X(2*n+1,n+1), t(n+1)
for j = 1:n+1
sum = 0;
for i = 1:2*j+1
f = @(g)exp[-exp[X(i,j)+g]*(t(j+1)-t(j))];
sum = sum + f;
end
fzero(sum,0)
end
That is,
I want to evaluate at
j = 1
f = @(g)exp[-exp[X(1,1)+g]*(t(j+1)-t(j))]
fzero(f,0)
j = 2
f = @(g)exp[-exp[X(1,2)+g]*(t(j+1)-t(j))] + exp[-exp[X(2,2)+g]*(t(j+1)-t(j))] + exp[-exp[X(3,2)+g]*(t(j+1)-t(j))]
fzero(f,0)
j = 3
etc...
However, I have no idea how to actually implement this in practice.
Any help is appreciated!
PS - I do not have the symbolic toolbox in Matlab.
I suggest making use of MATLAB's array operations:
zerovec = zeros(1,n+1); %preallocate
for k = 1:n+1
f = @(y) sum(exp(-exp(X(1:2*k+1,k)+y)*(t(k+1)-t(k))));
zerovec(k) = fzero(f,0);
end
However, note that the sum of exponentials will never be zero unless the exponent is complex, which fzero will never find, so the question is a bit of a moot point.
Another solution is to write a function:
function [ total ] = func(j,g,t,X)
% "total" is used instead of "sum" so that the built-in sum function is not shadowed
total = 0;
for i = 0:2*j
f = exp(-exp(X(i+1,j+1)+g)*(t(j+3)-t(j+2)));
total = total + f;
end
end
Then loop your solver
for j=0:n
fun = @(g)func(j,g,t,X);
fzero(fun,0)
end
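If you also want to keep the roots from this version, you can pre-allocate a vector just like in the first suggestion (a small sketch):
zerovec = zeros(1,n+1);
for j = 0:n
fun = @(g)func(j,g,t,X);
zerovec(j+1) = fzero(fun,0);
end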

find optimum values of model iteratively

I have a model that can be expressed as:
y = a + b*st + c*d2
where st is a smoothed version of some data, and a, b and c are unknown model coefficients. An iterative process should be used to find the best values for a, b and c, and also for an additional value alpha, shown below.
Here, I show an example using some data that I have. I'll only show a small fraction of the data here, to give an idea of what I have:
17.1003710350253 16.7250000000000 681.521316544969
17.0325989276234 18.0540000000000 676.656460644882
17.0113862864815 16.2460000000000 671.738125420192
16.8744356336601 15.1580000000000 666.767363772145
16.5537077980594 12.8830000000000 661.739644621949
16.0646524243248 10.4710000000000 656.656219934146
15.5904357723302 9.35000000000000 651.523986525985
15.2894427136087 12.4580000000000 646.344231349275
15.1181450512182 9.68700000000000 641.118300709434
15.0074128442766 10.4080000000000 635.847600747838
14.9330905954828 11.5330000000000 630.533597865332
14.8201069920058 10.6830000000000 625.177819082427
16.3126863409751 15.9610000000000 619.781852331734
16.2700386755872 16.3580000000000 614.347346678083
15.8072873786912 10.8300000000000 608.876012461843
15.3788908036751 7.55000000000000 603.369621360944
15.0694302370038 13.1960000000000 597.830006367160
14.6313314652840 8.36200000000000 592.259061672302
14.2479738025295 9.03000000000000 586.658742460043
13.8147156115234 5.29100000000000 581.031064599264
13.5384821473624 7.22100000000000 575.378104234926
13.3603543306796 8.22900000000000 569.701997272687
13.2469020140965 9.07300000000000 564.004938753678
13.2064193251406 12.0920000000000 558.289182116093
13.1513460035983 12.2040000000000 552.557038340513
12.8747853506079 4.46200000000000 546.810874976187
12.5948999131388 4.61200000000000 541.053115045791
12.3969691298003 6.83300000000000 535.286235826545
12.1145822760120 2.43800000000000 529.512767505944
11.9541188991626 2.46700000000000 523.735291710730
11.7457790927936 4.15000000000000 517.956439908176
11.5202981254529 4.47000000000000 512.178891679167
11.2824263926694 2.62100000000000 506.405372863054
11.0981930749608 2.50000000000000 500.638653574697
10.8686514170776 1.66300000000000 494.881546094641
10.7122053911554 1.68800000000000 489.136902633882
10.6255883267131 2.48800000000000 483.407612975178
10.4979083986908 4.65800000000000 477.696601993434
10.3598092538338 4.81700000000000 472.006827058220
10.1929490084608 2.46700000000000 466.341275322034
10.1367069580204 2.36700000000000 460.702960898512
10.0194072271384 4.87800000000000 455.094921935306
9.88627023967911 3.53700000000000 449.520217586971
9.69091601129389 0.417000000000000 443.981924893704
9.48684595125235 -0.567000000000000 438.483135572389
9.30742664359900 0.892000000000000 433.026952726910
9.18283037670750 1.50000000000000 427.616487485241
9.02385722622626 1.75800000000000 422.254855571341
8.90355705229410 2.46700000000000 416.945173820367
8.76138912769045 1.99200000000000 411.690556646207
8.61299614111510 0.463000000000000 406.494112470755
8.56293606861698 6.55000000000000 401.358940124780
8.47831879772002 4.65000000000000 396.288125230599
8.42736865902327 6.45000000000000 391.284736577104
8.26325535934842 -1.37900000000000 386.351822497948
8.14547793724500 1.37900000000000 381.492407263967
8.00075641792910 -1.03700000000000 376.709487501030
7.83932517791044 -1.66700000000000 372.006028644665
7.68389447250257 -4.12900000000000 367.384961442799
7.63402151555169 -2.57900000000000 362.849178517935
The results that follow probably won't be meaningful, as the full data would be needed (but this is an example). Using this data I have tried to solve the problem iteratively by
y = d(:,1);
d1 = d(:,2);
d2 = d(:,3);
alpha_o = linspace(0.01,1,10);
a = linspace(0.01,1,10);
b = linspace(0.01,1,10);
c = linspace(0.01,1,10);
defining different values for a, b, and c as well as another term alpha, which is used in the model, and am now going to find every possible combination of these parameters and see which combination provides the best fit to the data:
% every possible combination of values
xx = combvec(alpha_o,a,b,c);
% loop through each possible combination of values
for j = 1:size(xx,2);
alpha_o = xx(1,j);
a_o = xx(2,j);
b_o = xx(3,j);
c_o = xx(4,j);
st = d1(1);
for i = 2:length(d1);
st(i) = alpha_o.*d1(i) + (1-alpha_o).*st(i-1);
end
st = st(:);
y_pred = a_o + (b_o*st) + (c_o*d2);
mae(j) = nanmean(abs(y - y_pred));
end
I can then re-run the model using these optimum values:
[id1,id2] = min(mae);
alpha_opt = xx(:,id2);
st = d1(1);
for i = 2:length(d1);
st(i) = alpha_opt(1).*d1(i) + (1-alpha_opt(1)).*st(i-1);
end
st = st(:);
y_pred = alpha_opt(2) + (alpha_opt(3)*st) + (alpha_opt(4)*d2);
mae_final = nanmean(abs(y - y_pred));
However, to reach a final answer I would need to increase the number of initial guesses to more than 10 for each variable, which would take a long time to run. Therefore, I am wondering if there is a better method for what I am trying to do here? Any advice is appreciated.
Here are some thoughts: if you can decrease the amount of computation within each for loop, you can possibly speed it up. One possible way is to look for work that is repeated in every iteration and move it outside the loop.
If you look at the iteration, you'll see
st(1) = d1(1)
st(2) = a*d1(2) + (1-a)*st(1) = a*d1(2) + (1-a)*d1(1)
st(3) = a*d1(3) + (1-a)*st(2) = a*d1(3) + a*(1-a)*d1(2) + (1-a)^2*d1(1)
st(n) = a*d1(n) + a*(1-a)*d1(n-1) + a*(1-a)^2*d1(n-2) + ... + (1-a)^(n-1)*d1(1)
(here a stands for alpha_o)
This means st can be calculated by multiplying these two matrices element-wise (here I use n = 4 to illustrate the concept) and summing along the first dimension:
temp1 = [ 0      0        0          a ;
          0      0        a          a*(1-a) ;
          0      a        a*(1-a)    a*(1-a)^2 ;
          1      (1-a)    (1-a)^2    (1-a)^3 ;]
temp2 = [ 0      0        0        d1(4) ;
          0      0        d1(3)    d1(3) ;
          0      d1(2)    d1(2)    d1(2) ;
          d1(1)  d1(1)    d1(1)    d1(1) ;]
st = sum(temp1.*temp2,1)
Here's code that utilizes this concept: computation has been moved out of the inner for loop and only assignment is left.
alpha_o = linspace(0.01,1,10);
xx = nchoosek(alpha_o, 4);
n = size(d1,1);
matrix_d1 = zeros(n, n);
d2 = d2'; % To make the dimension of d2 and st the same.
for ii = 1:n
matrix_d1(n-ii+1:n, ii) = d1(1:ii);
end
st = zeros(size(d1)'); % Pre-allocation of matrix will improve speed.
mae = zeros(1,size(xx,1));
matrix_alpha = zeros(n, n);
for j = 1 : size(xx,1)
alpha_o = xx(j,1);
temp = (power(1-alpha_o, [0:n-1])*alpha_o)';
matrix_alpha(n,:) = power(1-alpha_o, [0:n-1]);
for ii = 2:n
matrix_alpha(n-ii+1:n-1, ii) = temp(1:ii-1);
end
st = sum(matrix_d1.*matrix_alpha, 1);
y_pred = xx(j,2) + xx(j,3)*st + xx(j,4)*d2;
mae(j) = nanmean(abs(y - y_pred));
end
Then :
[~, idx] = min(mae);
alpha_opt = xx(idx,:);
st = zeros(size(d1)');
temp = (power(1-alpha_opt(1), [0:n-1])*alpha_opt(1))';
matrix_alpha = zeros(n, n);
matrix_alpha(n,:) = power(1-alpha_opt(1), [0:n-1]);
for ii = 2:n
matrix_alpha(n-ii+1:n-1, ii) = temp(1:ii-1);
end
st = sum(matrix_d1.*matrix_alpha, 1);
y_pred = alpha_opt(2) + (alpha_opt(3)*st) + (alpha_opt(4)*d2);
mae_final = nanmean(abs(y - y_pred));
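A smaller change that avoids building the n-by-n matrices altogether: the smoothing recursion st(i) = alpha_o*d1(i) + (1-alpha_o)*st(i-1) with st(1) = d1(1) is exactly a first-order IIR filter, so inside the original loop from the question it can be replaced by a single call to filter (a sketch, keeping the question's variable names and loop structure):
for j = 1:size(xx,2)
alpha_o = xx(1,j);
% one filter call reproduces the element-by-element smoothing loop
st = filter(alpha_o, [1, -(1-alpha_o)], d1, (1-alpha_o)*d1(1));
y_pred = xx(2,j) + xx(3,j)*st + xx(4,j)*d2;
mae(j) = nanmean(abs(y - y_pred));
end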
Let me know if this helps!

Matlab Genetic Algorithm for a non-genetic case

I've never used optimization tools, but I think I have to use them now, so I'm a bit lost. After using the answer given by @A. Donda, I have noticed that maybe it is not the best solution, because every time I run the function it gives a different matrix 'pares', and most of the time it says that I need more evaluations. So I was thinking that maybe the answer to my problem is Genetic Algorithm optimization, but once again I do not know how to work with it.
My first problem is described below, and the answer by @A. Donda is the only posted answer. I really need this optimization done and I don't know how to proceed in this case with GA tools.
Thank you so much in advance again, and thank you @A. Donda for your answer.
As asked, here is the code I was trying to explain; I hope it makes things clearer:
function opt_pares
clear all; clc; close all;
h = randi(24,8760,1);
nd = randi(365,8760,1);
veic = randi(333,8760,1);
max_veic = max(veic);
veicN = veic./max_veic;
Gh = randi(500,8760,1);
Dh = randi(500,8760,1);
Ih = Gh-Dh;
A = randi([300 800], 27,1);
max_Gh = max(Gh);
max_Dh = max(Dh);
max_Ih = max(Ih);
lat = 70;
HRA =15.*(h-12);
decl = 23.27*sind(360*(284+nd)/365);
Ii = zeros(8760,27);
Di = zeros(8760,27);
Gi = zeros(8760,27);
pares = randi([0,90],27,2);
inclin = pares(:,1);
azim = pares(:,2);
% for MATRIZC
for n=1:27
Ii(:,n) = Ih.*(sind(decl).*sind(lat).*cosd(inclin(n))-sind(decl).*cosd(lat).*sind(inclin(n)).*cosd(azim(n))+cosd(decl).*cosd(lat).*cosd(inclin(n)).*cosd(HRA)+cosd(decl).*sind(lat).*sind(inclin(n)).*cosd(azim(n)).*cosd(HRA)+cosd(decl).*sind(inclin(n)).*sind(azim(n)).*sind(HRA));
Di(:,n) = 0.5*Dh.*(1+cosd(inclin(n)));
Gi(:,n) = (Ii(:,n)+Di(:,n))*A(n,1);
end
Gparque = sum(Gi,2);
max_Gparque = max(Gparque);
GparqueN = Gparque./max_Gparque;
RMSE = sqrt(mean((GparqueN-veicN).^2));
% end
end
I don't know if this is possible; maybe this time I can explain it more clearly.
My main goal is to achieve the best possible 'RMSE'. To do so, I have to create a matrix ('pares') where each row contains a pair of values (one value from each column).
These values have to be within a certain range (0-90). With each of these 27 pairs I have to calculate 'Ii'/'Di'/'Gi', giving me matrices of size 8760*27.
Then I sum 'Gi' to get 'Gparque' (a vector of size 8760*1) and finally I calculate 'RMSE'. Once RMSE is calculated, I have to modify the matrix 'pares' to other values that may result in a better RMSE. Since there are many combinations of 27 pairs within the 0-90 range, I need a solution that can optimize this search for the minimum RMSE.
The part marked with comments in the code (the for loop over 'pares') is the part I have no idea how to do, because I have to change the values of 'pares' according to some optimization criterion that approaches the minimum RMSE.
I hope this time I have explained my question better.
Thank you very much!
OK, so here is an attempt at an answer. I'm not sure how useful the results will be in the end, because I don't understand the underlying problem and I don't have real data to test it with.
You were right that you need an optimization algorithm; your problem appears to be more complex than simple linear algebra. For optimization I use the function fminsearch, which ships with core MATLAB.
First the function whose value is to be optimized (the objective function) needs to be defined. Based on your code, this is
function RMSE = fun(pares)
inclin = pares(:,1);
azim = pares(:,2);
Ii = zeros(8760,27);
Di = zeros(8760,27);
Gi = zeros(8760,27);
for n=1:27
Ii(:,n) = Ih.*(sind(decl).*sind(lat).*cosd(inclin(n))-sind(decl).*cosd(lat).*sind(inclin(n)).*cosd(azim(n))+cosd(decl).*cosd(lat).*cosd(inclin(n)).*cosd(HRA)+cosd(decl).*sind(lat).*sind(inclin(n)).*cosd(azim(n)).*cosd(HRA)+cosd(decl).*sind(inclin(n)).*sind(azim(n)).*sind(HRA));
Di(:,n) = 0.5*Dh.*(1+cosd(inclin(n)));
Gi(:,n) = (Ii(:,n)+Di(:,n))*A(n,1);
end
Gparque = sum(Gi,2);
max_Gparque = max(Gparque);
GparqueN = Gparque./max_Gparque;
RMSE = sqrt(mean((GparqueN-veicN).^2));
end
Now we can call
pares_opt = fminsearch(@fun, randi([0,90],27,2))
using random initialization. The optimization takes quite a while because the objective function is not very efficiently implemented. Here's a vectorized version that does the same:
% precompute
cHRA = cosd(HRA);
sHRA = sind(HRA);
sdecl = sind(decl);
cdecl = cosd(decl);
slat = sind(lat);
clat = cosd(lat);
function RMSE = fun(pares)
% precompute
cinclin = cosd(pares(:,1))';
sinclin = sind(pares(:,1))';
cazim = cosd(pares(:,2))';
sazim = sind(pares(:,2))';
Ii = bsxfun(@times, Ih, ...
sdecl * (slat * cinclin - clat * sinclin .* cazim) ...
+ (cdecl .* cHRA) * (clat * cinclin + slat * sinclin .* cazim) ...
+ (cdecl .* sHRA) * (sinclin .* sazim));
Di = 0.5 * Dh * (1 + cinclin);
Gi = (Ii + Di) * diag(A);
Gparque = sum(Gi,2);
max_Gparque = max(Gparque);
GparqueN = Gparque./max_Gparque;
RMSE = sqrt(mean((GparqueN-veicN).^2));
end
We have not yet implemented the constraint for pares to lie within [0, 90]. A crude way to do this is to insert these lines:
if any(pares(:) < 0) || any(pares(:) > 90)
RMSE = inf;
return
end
at the beginning of the objective function.
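If the Optimization Toolbox is available, an alternative to returning inf is to hand the box constraints to fmincon, which accepts matrix-valued unknowns and lower/upper bounds directly (a sketch, not part of the original answer and not tested on real data):
lb = zeros(27,2);    % lower bound for every entry of pares
ub = 90*ones(27,2);  % upper bound
pares_opt = fmincon(@fun, randi([0,90],27,2), [],[],[],[], lb, ub)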
Putting it all together:
function Raquel
h = randi(24,8760,1);
nd = randi(365,8760,1);
veic = randi(333,8760,1);
max_veic = max(veic);
veicN = veic./max_veic;
Gh = randi(500,8760,1);
Dh = randi(500,8760,1);
Ih = Gh-Dh;
A = randi([300 800], 27,1);
lat = 70;
HRA =15.*(h-12);
decl = 23.27*sind(360*(284+nd)/365);
% precompute
cHRA = cosd(HRA);
sHRA = sind(HRA);
sdecl = sind(decl);
cdecl = cosd(decl);
slat = sind(lat);
clat = cosd(lat);
pares_opt = fminsearch(@fun, randi([0,90],27,2))
function RMSE = fun(pares)
if any(pares(:) < 0) || any(pares(:) > 90)
RMSE = inf;
return
end
% precompute
cinclin = cosd(pares(:,1))';
sinclin = sind(pares(:,1))';
cazim = cosd(pares(:,2))';
sazim = sind(pares(:,2))';
Ii = bsxfun(@times, Ih, ...
sdecl * (slat * cinclin - clat * sinclin .* cazim) ...
+ (cdecl .* cHRA) * (clat * cinclin + slat * sinclin .* cazim) ...
+ (cdecl .* sHRA) * (sinclin .* sazim));
Di = 0.5 * Dh * (1 + cinclin);
Gi = (Ii + Di) * diag(A);
Gparque = sum(Gi,2);
max_Gparque = max(Gparque);
GparqueN = Gparque./max_Gparque;
RMSE = sqrt(mean((GparqueN-veicN).^2));
end
end
With simulated data, if I run the optimization twice on the same randomized data but with different initial values, I get different solutions. This is an indication that the objective function has more than one local minimum. Hopefully, this will not be the case with real data.
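One common way to deal with several local minima (a sketch, not part of the original answer) is to restart fminsearch from a number of random starting matrices and keep the best result:
best_RMSE = inf;
for run = 1:10                                    % the number of restarts is arbitrary here
[p, val] = fminsearch(@fun, randi([0,90],27,2));
if val < best_RMSE
best_RMSE = val;
pares_opt = p;
end
end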