How to define a multivariable goal optimization in MATLAB

I'm trying to understand multivariable goal optimization. I will need to optimize complex functions, but to start, I need to optimize the following function:
function ap_phase = objecfun(tau)
f = 1000; %Frequency
w = 2*pi*f; %Angular Frequency
trans_func = @(taux) (1-1i*w*taux)./(1+1i*w*taux); %Transfer function
trans_zero = trans_func(tau(1)); %Transfer function evaluated with the first variable
trans_quad = trans_func(tau(2)); %Transfer function evaluated with the second variable
ap_phase = rad2deg(phase(trans_zero)-phase(trans_quad)); %Phase difference
end
The function objecfun takes a vector of length 2 as input, computes two transfer functions, and then subtracts their phases.
My goal is that the phase difference should be around 90°.
The script I'm using to run the optimization is the following:
tau0 = [2E-5, 1E-3]; %Initial Value for tau(1) and tau(2)
lb = [1E-7, 1E-7]; %Lower bound for tau(1) and tau(2)
ub = [1E-2, 1E-2]; %Upper bound for tau(1) and tau(2)
goal = 90; %Optimization goal
weight = 1; %Weight
[x,fval] = fgoalattain(@objecfun,tau0,goal,weight,[],[],[],[],lb,ub)
The optimizer converges, but I'm getting a wrong answer:
x =
0.0100 0.0000
fval =
-178.1044
That's wrong; fval should be near 90°.
What am I doing wrong?

I think you need to replace your objective function and goal value so that they fit the problem formulation: with a single goal of 90 and a weight of 1, fgoalattain is free to over-achieve the goal, so it pushes the phase difference as far below 90° as it can rather than toward it. You can instead use as the objective, say, the L2 norm of the difference between your function's output and the desired angle, and set the goal to some small tolerance.
I also checked with fmincon:
new_goal = 1e-4;
objectfun = @(x) norm(objecfun(x) - goal);
options = optimoptions('fgoalattain');
options.PlotFcns = 'optimplotfval';
[tau_star,fval] = fgoalattain(objectfun,tau0,new_goal,weight,[],[],[],[],lb,ub,[],options);
options = optimoptions('fmincon');
options.PlotFcns = 'optimplotfval';
[tau_star2,fval,exitflag,output] = fmincon(objectfun,tau0,[],[],[],[],lb,ub,[], options);
fgoalattain_solution_phase_diff = objecfun(tau_star)
fmincon_solution_phase_diff = objecfun(tau_star2)
And got:
fgoalattain_solution_phase_diff =
90.0000
fmincon_solution_phase_diff =
90.0006
Note: you can also omit the rad2deg in your function and specify the desired angle in radians.
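A minimal sketch of that radian variant (assuming objecfun is edited to return phase(trans_zero) - phase(trans_quad) without the rad2deg call):
goal_rad = pi/2;                                 % 90 degrees expressed in radians
objectfun = @(x) norm(objecfun(x) - goal_rad);   % same L2-norm objective as above
[tau_star, fval] = fmincon(objectfun, tau0, [], [], [], [], lb, ub);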

Related

How to integrate this function in MATLAB

Looking to integrate this function! The parameter values are all given below. I was hoping to find the variable "initial" for which the function equals zero, but I'm not sure how to go about this in MATLAB. My integration bounds are from initial to pi/2, and the integral is with respect to theta, so it is d(theta).
F = .00024;
E = 10^6; %mPa
I = 6.1*10^-9;
L = .01;
func = @(initial, theta) L/sqrt((E*I)/(2*F)) - int(1/sqrt(cos(initial)-cos(theta)),initial,pi/2); % function
oo = fzero(func,x0);
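A minimal numeric sketch of one way to pose this, using the numeric integral instead of the symbolic int, and assuming the goal is to find the value of initial in (0, pi/2) at which the expression equals zero (the bracket passed to fzero is a guess and may need adjusting; the integrand has an integrable square-root singularity at theta = initial, which integral usually copes with):
F = .00024; E = 10^6; I = 6.1*10^-9; L = .01;
lhs = L/sqrt((E*I)/(2*F));
% residual whose root in a (= initial) we want
resid = @(a) lhs - integral(@(theta) 1./sqrt(cos(a) - cos(theta)), a, pi/2);
bracket = [0.1, pi/2 - 1e-6];   % assumed search interval strictly inside (0, pi/2)
initial = fzero(resid, bracket)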

SIR model using fsolve and Euler 3BDF

Hi, I've been asked to solve the SIR model using the fsolve command in MATLAB and a 3-point backward Euler (3BDF) scheme. I'm really confused about how to proceed, please help. This is what I have so far: I created a function for the 3BDF scheme, but I'm not sure how to proceed with fsolve to solve the system of nonlinear ODEs. The SIR model and the 3BDF scheme are formulated as in the images from the original post.
clc
clear all
gamma=1/7;
beta=1/3;
ode1= @(R,S,I) -(beta*I*S)/(S+I+R);
ode2= @(R,S,I) (beta*I*S)/(S+I+R)-I*gamma;
ode3= @(I) gamma*I;
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
R0=0;
I0=10;
S0=8e6;
odes={ode1;ode2;ode3}
fun = @root2d;
x0 = [0,0];
x = fsolve(fun,x0)
function [xs,yb] = ThreePointBDF(f,x0, xmax, h, y0)
% This function should return the numerical solution of y at x = xmax.
% (It should not return the entire time history of y.)
% TO BE COMPLETED
xs = x0:h:xmax;
y = zeros(1,length(xs));
y(1) = y0;
yb(1) = y0 + f(x0,y0)*h;
for i = 1:length(xs)-1
    R = R0;
    y1(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - R, y1(i-1,:) + 2*h*F(i,:))
    S = S0;
    y2(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - S, y2(i-1,:) + 2*h*F(i,:))
    I = I0;
    y3(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1),u) - I, y3(i-1,:) + 2*h*F(i,:))
end
end
You have an implicit equation
y(i+1) - 2*h/3*f(t(i+1),y(i+1)) = G = (4*y(i) - y(i-1))/3
where the right-side term G is constant in the call to fsolve, that is, during the solution of the implicit step equation.
Note that this is for the vector valued system y'(t)=f(t,y(t)) where
f(t,[S,I,R]) = [-(beta*I*S)/(S+I+R); (beta*I*S)/(S+I+R)-I*gamma; gamma*I];
To solve this write
G = (4*y(i,:) - y(i-1,:))/3
y(i+1,:) = fsolve(@(u) u-2*h/3*f(t(i+1),u) - G, y(i-1,:)+2*h*F(i,:))
where a midpoint step is used to get an order-2 approximation as initial guess, F(i,:) = f(t(i),y(i,:)). Add solver options for error tolerances as necessary; you want the error in the implicit equation to be smaller than the truncation error O(h^3) of the step. One can also keep only a short array of function values, but then one has to be careful about the correspondence between positions in the short array and the time index.
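A minimal MATLAB sketch of that loop, assuming f is a handle f(t,y) returning a column vector and t is a uniform time grid; the second point is started with an explicit Euler step (the second-order variant discussed below), and the function name is just for illustration:
function y = ThreePointBDF(f, t, y0)
% f  : handle f(t,y) returning a column vector (e.g. the SIR right-hand side)
% t  : uniform time grid, y0 : initial state as a column vector
h = t(2) - t(1);
y = zeros(numel(t), numel(y0));
y(1,:) = y0(:).';
y(2,:) = y(1,:) + h*f(t(1), y(1,:).').';           % explicit Euler step for the second point
opts = optimoptions('fsolve', 'Display', 'none', ...
                    'FunctionTolerance', 1e-3*h^3, 'StepTolerance', 1e-3*h^3);
for i = 2:numel(t)-1
    G     = (4*y(i,:) - y(i-1,:))/3;               % constant during the fsolve call
    guess = y(i-1,:) + 2*h*f(t(i), y(i,:).').';    % midpoint predictor as initial guess
    y(i+1,:) = fsolve(@(u) u - 2*h/3*f(t(i+1), u.').' - G, guess, opts);
end
end
% usage sketch for the SIR model above (step size chosen arbitrarily):
% fSIR = @(t,y) [-beta*y(1)*y(2)/sum(y); beta*y(1)*y(2)/sum(y) - gamma*y(2); gamma*y(2)];
% y = ThreePointBDF(fSIR, 0:0.5:120, [8e6; 10; 0]);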
Using all that, together with a reference solution from a higher-order standard solver, produces error graphs for the components (reproducible with the plotting code below), where one can see that the first-order error of the constant first step results in a first-order global error, while a second-order error in the first step, using the Euler method, results in a clear second-order global error.
Implement the method in general terms
import numpy as np
from scipy.optimize import fsolve

def BDF2(f, t, y0, y1):
    N, h = len(t)-1, t[1]-t[0]
    y = (N+1)*[np.asarray(y0)]
    y[1] = y1
    for i in range(1, N):
        t1, G = t[i+1], (4*y[i] - y[i-1])/3
        y[i+1] = fsolve(lambda u: u - 2*h/3*f(t1, u) - G, y[i-1] + 2*h*f(t[i], y[i]), xtol=1e-3*h**3)
    return np.vstack(y)
Set up the model to be solved
gamma = 1/7
beta = 1/3
print(beta, gamma)
y0 = np.array([8e6, 10, 0])
P = sum(y0); y0 = y0/P
def f(t,y):
    S, I, R = y
    trns = beta*S*I/(S+I+R)
    recv = gamma*I
    return np.array([-trns, trns-recv, recv])
Compute a reference solution and method solutions for the two initialization variants
from scipy.integrate import odeint
tg = np.linspace(0,120,25*128)
yg = odeint(f,y0,tg,atol=1e-12, rtol=1e-14, tfirst=True)
M = 16; # 8,4
t = tg[::M];
h = t[1]-t[0];
y1 = BDF2(f,t,y0,y0)
e1 = y1-yg[::M]
y2 = BDF2(f,t,y0,y0+h*f(0,y0))
e2 = y2-yg[::M]
Plot the errors. The computation is the same as above, but embedded in the plot commands; in principle it could be separated by first computing a list of solutions.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(3, 2, figsize=(12,6))
for M in [16, 8, 4]:
    t = tg[::M]
    h = t[1]-t[0]
    y = BDF2(f,t,y0,y0)
    e = (y-yg[::M])
    for k in range(3): ax[k,0].plot(t, e[:,k], '-o', ms=1, lw=0.5, label="h=%.3f"%h)
    y = BDF2(f,t,y0,y0+h*f(0,y0))
    e = (y-yg[::M])
    for k in range(3): ax[k,1].plot(t, e[:,k], '-o', ms=1, lw=0.5, label="h=%.3f"%h)
for k in range(3):
    for j in range(2): ax[k,j].set_ylabel(["$e_S$","$e_I$","$e_R$"][k]); ax[k,j].legend(); ax[k,j].grid()
ax[0,0].set_title("Errors: first step constant")
ax[0,1].set_title("Errors: first step Euler")

How to use dynamic function name in MATLAB?

I am trying to optimize the following program by using for loops
t = 0:0.1:100;
conc = rand(size(t));
syms x
equ_1(x) = 10*x.^2+1;
equ_2(x) = 5*x.^3+10*x.^2;
equ_3(x) = 5*x.^3+10*x.^2;
y_1 = equ_1(conc);
y_2 = equ_2(conc);
y_3 = equ_3(conc);
p_1 = polyfit(t,y_1,1);
p_2 = polyfit(t,y_2,1);
p_3 = polyfit(t,y_3,1);
yfit_1 = p_1(1)*conc+p_1(2);
yfit_2 = p_2(1)*conc+p_2(2);
yfit_3 = p_3(1)*conc+p_3(2);
rms_er_1 = double(sqrt((sum((yfit_1-y_1).^2)./length(yfit_1))));
rms_er_2 = double(sqrt((sum((yfit_2-y_2).^2)./length(yfit_2))));
rms_er_3 = double(sqrt((sum((yfit_3-y_3).^2)./length(yfit_3))));
rms = [rms_er_1 rms_er_2 rms_er_3]
In this program I have many equations, and I can write them manually like equ_1(x), equ_2(x), equ_3(x), etc. After writing the equations, is it possible to write the remaining program using for loops?
Can anyone help?
Yes, it is possible. You can pack your functions into a cell array and pass your values as parameters while looping over this cell array:
t = (0:0.1:100)';
conc = rand(size(t));
% Packing your function handles in a cell array ( I do not have the
% symbolic math toolbox, so I used function handles here. In your case you
% have to pack your equations equ_n(x) in between the curly brackets{} )
allfuns = {@(x) 10*x.^2+1, ...
    @(x) 5*x.^3+10*x.^2, ...
    @(x) 5*x.^3+10*x.^2};
% Allocate memory
y = zeros(length(t), length(allfuns));
p = zeros(2,length(allfuns));
yfit = zeros(length(t), length(allfuns));
rms = zeros(1, length(allfuns));
% Loop over all functions in the cell array, applying your functional chain
for i = 1:length(allfuns)
    y(:,i) = allfuns{i}(t);
    p(:,i) = polyfit(t, y(:,i), 1);
    yfit(:,i) = p(1,i)*conc + p(2,i);
    rms(:,i) = double(sqrt((sum((yfit(:,i)-y(:,i)).^2)./ ...
        length(yfit(:,i)))));
end
This leads to
>> rms
rms =
1.0e+06 *
0.0578 2.6999 2.6999
You can expand that to an arbitrary number of equations in allfuns.
Btw: you are fitting 1st-order polynomials with polyfit to values calculated with 2nd- and 3rd-order functions. This of course leads to rough fits with a high rms. I do not know what your complete problem looks like, but you could define an array poly_orders containing the polynomial order of each function in allfuns. If you pass those values to the polyfit call in the loop, your fits will work much better.
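A minimal sketch of that idea (poly_orders is an array you would have to fill in yourself; the values below are just the orders of the three example functions, and the coefficient vectors now differ in length, so they go into a cell array instead of the fixed-size p):
poly_orders = [2, 3, 3];          % assumed orders matching allfuns above
pc = cell(1, length(allfuns));    % coefficient vectors of different lengths
for i = 1:length(allfuns)
    y(:,i) = allfuns{i}(t);
    pc{i} = polyfit(t, y(:,i), poly_orders(i));   % fit with the matching order
    yfit(:,i) = polyval(pc{i}, t);
    rms(i) = sqrt(mean((yfit(:,i) - y(:,i)).^2));
end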
You can try cellfun
Here is an example.
Define this in a .m file:
function y = your_complex_operation(f,x, t)
y_1 = f(x);
p_1 = polyfit(t,y_1,1);
yfit_1 = p_1(1)*x+p_1(2);
y = double(sqrt((sum((yfit_1-y_1).^2)./length(yfit_1))));
end
Then use cellfun:
funs{1}=@(x) 10*x.^2+1;
funs{2}=@(x) 5*x.^3+10*x.^2;
funs{3}=@(x) 5*x.^3+10*x.^2;
%as many as you need
t = 0:0.1:100;
conc = rand(size(t));
funs_res = cellfun(@(c) your_complex_operation(c,conc,t),funs);
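If your_complex_operation returned something other than a scalar (the fit coefficients, say), cellfun would need 'UniformOutput' set to false so that the results are collected in a cell array:
funs_res = cellfun(@(c) your_complex_operation(c,conc,t), funs, 'UniformOutput', false);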

Trying to minimise a function wrt 2 variables for robust portfolio optimisation. How to do this with fmincon?

I am currently involved in a group project where we have to conduct portfolio selection and optimisation. The paper being referenced is given here (specifically pages 5 and 6, equations 7-10):
http://faculty.london.edu/avmiguel/DeMiguel-Nogales-OR.pdf
We are having trouble creating the optimisation problem using M-Portfolios, given below
min over (w, m) of (1/T) * sum_{t=1..T} rho(w'*r_t - m)   (where rho is the loss function defined in the paper)
s.t. w'e = 1 (just a condition saying that all weights add to 1)
So far, this is what we have attempted:
function optPortfolio = portfoliofminconM(returns,theta)
% Compute the inputs of the mean-variance model
mu = mean(returns)';
sigma = cov(returns);
% Inputs for the fmincon function
T = 120;
n = length(mu);
w = theta(1:n);
m = theta((n+1):(2*n));
c = 0.01*ones(1,n);
Aeq = ones(1,(2*n));
beq = 1;
lb = zeros(2,n);
ub = ones(2,n);
x0 = ones(n,2) / n; % Start with the equally-weighted portfolio
options = optimset('Algorithm', 'interior-point', ...
'MaxIter', 1E10, 'MaxFunEvals', 1E10);
% Nested function which is used as the objective function
function objValue = objfunction(w,m)
cRp = (w'*(returns - ones(T,1)*m')')';
objValue = 0;
for i = 1:T
    if abs(cRp(i)) <= c
        objValue = objValue + (((cRp(i))^2)/2);
    else
        objValue = objValue + (c*(abs(cRp(i))-(c/2)));
    end
end
The problem starts with our definition of theta as a stacked vector of w and m. We don't know how to use fmincon properly with two variables in the objective function. In addition, the value of the objective function is conditional on another value (as shown in the paper), and this needs to be done over a rolling time window of 120 months within a total period of 264 months (hence the for loop and if-else).
If any more information is required, I will gladly provide it!
If you can additionally provide an example that deals with a similar problem, can you please link us to it.
Thank you in advance.
The way you minimize a function of two scalars with fmincon is to write your objective function as a function of a single, two-dimensional vector. For example, you would write f(x,y) = x.^2 + 2*x*y + y.^2 as f(x) = x(1)^2 + 2*x(1)*x(2) + x(2)^2.
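In code, that two-scalar example looks like this (a minimal sketch; the starting point and bounds are arbitrary):
fxy = @(x) x(1)^2 + 2*x(1)*x(2) + x(2)^2;   % f(x,y) written as a function of one 2-element vector
x_star = fmincon(fxy, [1; -1], [], [], [], [], [-10; -10], [10; 10]);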
More generally, you would write a function of two vectors as a function of a single, large vector. In your case, you could rewrite your objfunction or do a quick hack like:
objfunction_for_fmincon = @(x) objfunction(x(1:n), x(n+1:2*n));
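A minimal sketch of how the full call could then look, following the question's theta = [w; m] layout with both blocks of length n. Note that Aeq only touches the w-block, so it is the weights that sum to 1; the bounds are just the question's choices reshaped into 2n-by-1 vectors and may need adjusting for the m-block:
objfunction_for_fmincon = @(x) objfunction(x(1:n), x(n+1:2*n));
theta0 = [ones(n,1)/n; zeros(n,1)];   % [w0; m0]: equally weighted start, m0 = 0
Aeq = [ones(1,n), zeros(1,n)];        % w'*e = 1 applied to the w-block only
beq = 1;
lb = zeros(2*n,1);
ub = ones(2*n,1);
theta_star = fmincon(objfunction_for_fmincon, theta0, [], [], Aeq, beq, lb, ub, [], options);
w_star = theta_star(1:n);
m_star = theta_star(n+1:2*n);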

Numerical Integral in MatLab using integral command

I am trying to compute the value of this integral using MATLAB: c2 = integral from 0 to Inf of log(Apprho(x)/rho0)/(x^2 - w^2) dx, which then enters phin = pi/4 - (w/(2*pi))*c2. The other parameters have been defined or computed in the earlier part of the program as follows:
N = 2;
sigma = [0.01 0.1];
l = [15];
meu = 4*pi*10^(-7);
f = logspace ( 1, 6, 500);
w=2*pi.*f;
for j = 1 : length(f)
    q2(j) = sqrt(sqrt(-1)*2*pi*f(j)*meu*sigma(2));
    q1(j) = sqrt(sqrt(-1)*2*pi*f(j)*meu*sigma(1));
    C2(j) = 1/(q2(j));
    C1(j) = (q1(j)*C2(j) + tanh(q1(j)*l))/(q1(j)*(1+q1(j)*C2(j)*tanh(q1(j)*l)));
    Z(j) = sqrt(-1)*2*pi*f(j)*C1(j);
    Apprho(j) = meu*(1/(2*pi*f(j))*(abs(Z(j))^2));
    Phi(j) = atan(imag(Z(j))/real(Z(j)));
end
%integration part
c1=w./(2*pi);
rho0=1;
fun = @(x) log(Apprho(x)/rho0)/(x.^2-w^2);
c2= integral(fun,0,Inf);
phin=pi/4-c1.*c2;
I am getting an error (the error message is shown in the original post). Could anyone help and tell me where I am going wrong? Thanks in advance.
Define Apprho in a separate *.m function file, instead of storing it in an array:
function [ result ] = Apprho(x)
%
% Calculate f and Z based on input argument x
%
% ...
%
meu = 4*pi*10^(-7);
result = meu*(1/(2*pi*f)*(abs(Z)^2));
end
How you calculate f and Z is up to you.
MATLAB's integral works by calling the function (in this case, Apprho) repeatedly at many different x values. The x values called by integral don't necessarily correspond to the 1:length(f) values used in your original code, which is why you received errors.
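A minimal sketch of what such a function could look like, keeping the question's formulas and assuming the integration variable x stands for angular frequency (note also that the integrand has a pole at x = w, so the integral as written is really a principal value; the plain integral call below may warn or need the domain split at that point):
function result = Apprho(x)
% x: angular frequency; written elementwise so integral can pass vectors of x values
meu   = 4*pi*10^(-7);
sigma = [0.01 0.1];
l     = 15;
q2 = sqrt(1i*x*meu*sigma(2));
q1 = sqrt(1i*x*meu*sigma(1));
C2 = 1./q2;
C1 = (q1.*C2 + tanh(q1*l))./(q1.*(1 + q1.*C2.*tanh(q1*l)));
Z  = 1i*x.*C1;
result = meu*(1./x).*abs(Z).^2;
end
and then, for a single angular frequency wj = w(j) from the original loop:
rho0 = 1;
fun  = @(x) log(Apprho(x)/rho0)./(x.^2 - wj^2);   % note the elementwise ./ here
c2   = integral(fun, 0, Inf);
phin(j) = pi/4 - (wj/(2*pi))*c2;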