I have a vectorized function which is the objective function for an optimizer (a genetic algorithm). Inside this function there is a fast inner optimization that is part of the computation, as follows:
function error = ObjectiveFunction(a, b, c)
    x = a.*b;
    y = c.*b;
    z = patternsearch(@fun, [x, y]);
    error = x + y.*z;
end
solution = ga(@ObjectiveFunction, 'vectorized', true);
ObjectiveFunction accepts a vector of solutions, which makes ga run faster. However, since ObjectiveFunction contains a patternsearch call, this vectorization is useless, because patternsearch (as an optimizer) does not work in a vectorized manner. So I had to change my function to:
function error = ObjectiveFunction(a, b, c)
    x = a.*b;
    y = c.*b;
    for i = 1:size(x,1)
        z(i) = patternsearch(@fun, [x(i), y(i)]);
    end
    error = x + y.*z;
end
Is there any way to replace the loop with a vectorized call to patternsearch?
Please consider using arrayfun as follows:
function error = ObjectiveFunction(a, b, c)
    x = a.*b;
    y = c.*b;
    z = arrayfun(@(x1,y1) patternsearch(@fun, [x1, y1]), x, y);
    error = x + y.*z;
end
I hope this may help
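As a side note, here is a minimal sketch of how the vectorized option is usually passed to ga through optimoptions. The number of variables and the mapping of population columns to a, b, c are assumptions for the sketch (and it presumes ObjectiveFunction behaves as described in the question), not something stated there:

% Hypothetical setup: nvars and the column-to-argument mapping are assumptions.
opts = optimoptions('ga', 'UseVectorized', true);
nvars = 3;                                          % one column per variable
fitness = @(pop) ObjectiveFunction(pop(:,1), pop(:,2), pop(:,3));
solution = ga(fitness, nvars, [], [], [], [], [], [], [], opts);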
Hi, I've been asked to solve the SIR model using the fsolve command in MATLAB and a 3-point backward (3BDF) scheme. I'm really confused about how to proceed, please help. This is what I have so far. I created a function for the 3BDF scheme, but I'm not sure how to proceed with fsolve and solve the system of nonlinear ODEs. The SIR model is
S' = -beta*I*S/(S+I+R),   I' = beta*I*S/(S+I+R) - gamma*I,   R' = gamma*I,
and the 3BDF scheme is formulated as
y(n+1) - 2*h/3*f(t(n+1), y(n+1)) = (4*y(n) - y(n-1))/3.
clc
clear all
gamma=1/7;
beta=1/3;
ode1 = @(R,S,I) -(beta*I*S)/(S+I+R);
ode2 = @(R,S,I) (beta*I*S)/(S+I+R) - I*gamma;
ode3 = @(I) gamma*I;
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
R0=0;
I0=10;
S0=8e6;
odes = {ode1; ode2; ode3};
fun = @root2d;
x0 = [0,0];
x = fsolve(fun,x0)
function [xs, yb] = ThreePointBDF(f, x0, xmax, h, y0)
% This function should return the numerical solution of y at x = xmax.
% (It should not return the entire time history of y.)
% TO BE COMPLETED
xs = x0:h:xmax;
y = zeros(1, length(xs));
y(1) = y0;
yb(1) = y0 + f(x0, y0)*h;
for i = 1:length(xs)-1
    R = R0;
    y1(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - R, y1(i-1,:) + 2*h*F(i,:))
    S = S0;
    y2(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - S, y2(i-1,:) + 2*h*F(i,:))
    I = I0;
    y3(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - I, y3(i-1,:) + 2*h*F(i,:))
end
end
You have an implicit equation
y(i+1) - 2*h/3*f(t(i+1),y(i+1)) = G = (4*y(i) - y(i-1))/3
where the right-side term G is constant in the call to fsolve, that is, during the solution of the implicit step equation.
Note that this is for the vector valued system y'(t)=f(t,y(t)) where
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
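In MATLAB this right-hand side can be written as a single anonymous function; here is a minimal sketch (returning a row vector so it matches the row-wise storage used below; beta and gamma are as defined in the question):

% Sketch: SIR right-hand side as one anonymous function, y = [S, I, R] as a row.
f = @(t,y) [ -beta*y(2)*y(1)/sum(y), ...
              beta*y(2)*y(1)/sum(y) - gamma*y(2), ...
              gamma*y(2) ];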
To solve this write
G = (4*y(i,:) - y(i-1,:))/3
y(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - G, y(i-1,:) + 2*h*F(i,:))
where a midpoint step is used to get an order-2 approximation as the initial guess, with F(i,:) = f(t(i), y(i,:)). Add solver options for error tolerances as necessary; you want the error in the implicit equation to be smaller than the truncation error O(h^3) of the step. One can also keep only a short array of function values, but then one has to be careful about the correspondence between positions in the short array and the time index.
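In MATLAB terms, a minimal sketch of that loop could look like the following, using a right-hand side f that takes and returns a row vector (as in the sketch above); the fsolve tolerance and the explicit Euler first step are choices made for the sketch, not part of the original question:

% Minimal sketch: BDF2 / 3-point backward scheme with fsolve for the implicit step.
h = t(2) - t(1);
y = zeros(numel(t), numel(y0));
y(1,:) = y0;
y(2,:) = y0 + h*f(t(1), y0);           % order-2 start via an explicit Euler step
opts = optimoptions('fsolve', 'Display', 'off', 'FunctionTolerance', 1e-3*h^3);
for i = 2:numel(t)-1
    G  = (4*y(i,:) - y(i-1,:))/3;      % constant right-hand side of the implicit equation
    Fi = f(t(i), y(i,:));              % used for the midpoint-style initial guess
    y(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1), u) - G, y(i-1,:) + 2*h*Fi, opts);
end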
Using all that and a reference solution from a higher-order standard solver produces error graphs for the three components (generated by the plotting code below), where one can see that the first-order error of the constant first step results in a first-order global error, while a second-order error in the first step, obtained with the Euler method, results in a clear second-order global error.
Implement the method in general terms
import numpy as np
from scipy.optimize import fsolve

def BDF2(f, t, y0, y1):
    N, h = len(t)-1, t[1]-t[0]
    y = (N+1)*[np.asarray(y0)]
    y[1] = y1
    for i in range(1, N):
        t1, G = t[i+1], (4*y[i] - y[i-1])/3
        y[i+1] = fsolve(lambda u: u - 2*h/3*f(t1, u) - G,
                        y[i-1] + 2*h*f(t[i], y[i]), xtol=1e-3*h**3)
    return np.vstack(y)
Set up the model to be solved
gamma = 1/7
beta = 1/3
print(beta, gamma)
y0 = np.array([8e6, 10, 0])
P = sum(y0); y0 = y0/P      # normalize the populations
def f(t, y):
    S, I, R = y
    trns = beta*S*I/(S+I+R)
    recv = gamma*I
    return np.array([-trns, trns-recv, recv])
Compute a reference solution and method solutions for the two initialization variants
from scipy.integrate import odeint

tg = np.linspace(0, 120, 25*128)
yg = odeint(f, y0, tg, atol=1e-12, rtol=1e-14, tfirst=True)
M = 16  # also 8, 4
t = tg[::M]
h = t[1]-t[0]
y1 = BDF2(f, t, y0, y0)
e1 = y1 - yg[::M]
y2 = BDF2(f, t, y0, y0 + h*f(0, y0))
e2 = y2 - yg[::M]
Plot the errors. The computation is the same as above but embedded in the plot commands; in principle it could be separated by first computing a list of solutions.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(3, 2, figsize=(12, 6))
for M in [16, 8, 4]:
    t = tg[::M]
    h = t[1]-t[0]
    y = BDF2(f, t, y0, y0)                    # first step: constant
    e = y - yg[::M]
    for k in range(3):
        ax[k,0].plot(t, e[:,k], '-o', ms=1, lw=0.5, label="h=%.3f" % h)
    y = BDF2(f, t, y0, y0 + h*f(0, y0))       # first step: Euler
    e = y - yg[::M]
    for k in range(3):
        ax[k,1].plot(t, e[:,k], '-o', ms=1, lw=0.5, label="h=%.3f" % h)
for k in range(3):
    for j in range(2):
        ax[k,j].set_ylabel(["$e_S$", "$e_I$", "$e_R$"][k])
        ax[k,j].legend()
        ax[k,j].grid()
ax[0,0].set_title("Errors: first step constant")
ax[0,1].set_title("Errors: first step Euler")
plt.show()
I've noticed some weird facts about integral2. These are probably due to my limited understanding of how it works. I have some difficulty integrating out variables when I have particular functions. For instance, look at the following code:
function Output = prova(p, Y)
    x = p(1);
    y = p(2);
    w = p(3);
    z = p(4);
    F1 = @(Data,eta_1,eta_2,x,y,w,z) F2(eta_1,eta_2,Data) .* normpdf(eta_1,x,y) .* normpdf(eta_2,w,z);
    Output = integral2(@(eta_1,eta_2) F1(Y,eta_1,eta_2,0,1,10,2), -5,5, -5,5);
end
function O = F2(pp1, pp2, D)
    O = pp1 + pp2 + sum(D);
end
In this case there are no problems in evaluating the integral. But if I change the code in the following way, I obtain some errors, although the output of F2 is exactly the same:
function Output = prova(p, Y)
    x = p(1);
    y = p(2);
    w = p(3);
    z = p(4);
    F1 = @(Data,eta_1,eta_2,x,y,w,z) F2(eta_1,eta_2,Data) .* normpdf(eta_1,x,y) .* normpdf(eta_2,w,z);
    Output = integral2(@(eta_1,eta_2) F1(Y,eta_1,eta_2,0,1,10,2), -5,5, -5,5);
end
function O = F2(pp1, pp2, D)
    o = sum([pp1 pp2]);
    O = o + sum(D);
end
The problems increase if F2, for example, contains some matrix multiplication involving eta_1 and eta_2, the variables I want to integrate out. This makes it practically impossible to carry out computations in which, for instance, we have to integrate out a variable X that sits inside a likelihood function (whose calculation could require some internal loop, sum, or prod involving X). What is the solution?
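For context, a likely explanation and a minimal workaround sketch (this is not from the original post): integral2 calls the integrand with matrices of eta_1 and eta_2 values and expects an array of the same size back. The elementwise expression pp1 + pp2 + sum(D) satisfies that, but sum([pp1 pp2]) concatenates the two matrices and sums down the columns, so the returned size is wrong. Wrapping the scalar version of F2 in arrayfun forces one evaluation per quadrature point, at some cost in speed:

% Sketch of a workaround: force F2 to be evaluated elementwise via arrayfun.
F1 = @(Data,eta_1,eta_2,x,y,w,z) arrayfun(@(e1,e2) F2(e1,e2,Data), eta_1, eta_2) ...
        .* normpdf(eta_1,x,y) .* normpdf(eta_2,w,z);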
I am trying to feed a function handle into the function I created below. I'm not exactly sure how to do this.
For example, how do I get:
conjugate_gradient(@(y) ABC(y), column_vector, initial_guess)
to not error?
If I use MATLAB's pcg function in the same way, it works:
pcg(@(y) ABC(y), b, tol)
I tried reading the pcg function, and they do talk about this in the function description; however, I'm still super inexperienced with MATLAB and had, shall we say, some difficulty understanding what they did. Thank you!
function [x] = conjugate_gradient(matrix, column_vector, initial_guess)
    y = [];
    col_size = length(column_vector);
    temp = size(matrix);
    mat_row_len = temp(2);
    % algorithm:
    r_cur = column_vector - matrix * initial_guess;
    p = r_cur;
    k = 0;
    x_approx = initial_guess;
    for i = 1:mat_row_len
        alpha = ( r_cur.' * r_cur ) / (p.' * (matrix * p));
        x_approx = x_approx + alpha * p;
        r_next = r_cur - alpha*(matrix * p);
        fprintf(num2str(r_next'*r_next), num2str(i))
        y = [y; i, r_next'*r_next];
        % exit condition
        if sqrt(r_next'*r_next) < 1e-2
            y
            break;
        end
        beta = (r_next.' * r_next) / (r_cur.' * r_cur);
        p = r_next + beta * p;
        k = k+1;
        r_cur = r_next;
    end
    y
    [x] = x_approx;
end
When you do
f = @(y) ABC(y)
you create a function handle. (Note that in this case it's the same as f = @ABC.) This handle is a variable; it can be passed to a function, but it is otherwise used just like the function itself. Thus:
f(1)
is the same as calling
ABC(1)
You pass this handle to a function as the first argument, which you have called matrix. This seems misleading, as the variable matrix will now be a function handle, not a matrix. Inside your function you can do matrix(y) and evaluate the function for y.
However, reading your function, it seems that you treat the matrix input as an actual matrix. This is why you get errors. You cannot multiply it by a vector and expect a result!
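To illustrate (this is a sketch, not the poster's final code): pcg-style functions typically branch on whether the operator is a handle or a matrix, and then only ever apply it through a single call. A minimal version of the same idea applied to the loop above might look like this:

% Sketch: accept either a matrix or a function handle for the operator A*x.
function x_approx = conjugate_gradient(A, b, x0)
    if isa(A, 'function_handle')
        afun = A;               % already callable as afun(v)
    else
        afun = @(v) A*v;        % wrap a plain matrix so the loop below is uniform
    end
    r = b - afun(x0);
    p = r;
    x_approx = x0;
    for i = 1:length(b)
        Ap = afun(p);
        alpha = (r.'*r) / (p.'*Ap);
        x_approx = x_approx + alpha*p;
        r_next = r - alpha*Ap;
        if sqrt(r_next.'*r_next) < 1e-2
            break;
        end
        beta = (r_next.'*r_next) / (r.'*r);
        p = r_next + beta*p;
        r = r_next;
    end
end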
I have an integral expression which I defined on Matlab using
x = 0:1/1000:1;
g = @(x) (exp(-1./x.^2).*heaviside(x)).*(exp(-1./(1-x).^2).*heaviside(1-x));
t = 0:1/1000:1;
f = zeros(size(t));
for i = 1:length(t)
f(i) = integral(g,0,t(i));
end
I can plot it, for example, using plot(t,f), but for other purposes I would like to attach a function handle to f, i.e. something like f = @(t) zeros(size(t)). I have not been able to figure it out thus far. f = @(t) integral(@(x) g(x), 0, t) is also not sufficient.
Sorry, I can't comment yet. But does this work?
funcHand = @(t) integral(g, 0, t);
You don't have to define x in your code above, since the input to integral is a function handle.
Then to check it's the same:
f2 = zeros(size(t));
for i = 1:length(t)
f2(i) = funcHand(t(i));
end
Whoops, the other answer already said all of the above (it just replaced the for loop with arrayfun); I didn't see it while writing this answer.
Edit
If you want to build-in the for loop, try:
funcHand = @(t) arrayfun(@(u) integral(g, 0, u), t);
And test:
plot(funcHand(t))
Try
f = @(u) integral(g, 0, u)
The additional level of indirection in g seems superfluous. Note that I have called the input u. Keep in mind that f will not accept vectors as its inputs. So doing something like f(t) in your current workspace will not create the same array as your for loop is doing. You will have to iterate through the array. The convenience function arrayfun will do this for you:
o = arrayfun(f, t)
It is roughly equivalent to the loop you have now:
o = zeros(size(t));
for i = 1:length(o)
o(i) = f(t(i));
end
arrayfun can actually be incorporated into your function handle to allow it to process vector arguments:
h = @(t) arrayfun(f, t)
To avoid the proliferation of unnecessary function handles, you can do
f = @(t) arrayfun(@(u) integral(g, 0, u), t)
I want to create a function handle to the function:
f = @(x) (x-1)*(x-2)*...*(x-50);
How can I do this in MATLAB without typing all 50 terms?
Here is a vectorized solution:
y = prod(x - (1:50))
Or if you want an anonymous function:
f = @(x) prod(x - (1:50))
By the way, it might not be faster than @Chris's solution (which is good, and I upvoted it), because of MATLAB's JIT accelerator.
You could wrap it in a function. For example,
function y = myfunc(x, n)
    y = 1.;
    for i = 1:n
        y = y*(x-i);
    end
end
The function you defined is basically the product of a sequence, which is trivially written as a for loop.
In your case you want to compute this result for 50 terms, so you could just use y = myfunc(x, 50) or, if you want this to be a function handle, you could define
f = @(x) myfunc(x, 50);
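As a small extra note (an assumption beyond the answers above): if you also want the handle to accept a whole vector of x values at once, you can take the product along the dimension of the subtracted terms. This relies on implicit expansion (R2016b or later); with older versions, use bsxfun instead.

% Sketch: evaluate the 50-term product for every element of a vector x at once.
f = @(x) prod(x(:) - (1:50), 2);    % returns one product per element of x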