I have the following problem:
max CEQ(w) s.t. w in (0,1), where I don't know anything about CEQ(w) except that it is given by a fixed-point equation of the form CEQ(w) = F(CEQ(w)). If I fix a w, I can solve the fixed-point equation using fzero and obtain a value for CEQ; a different w gives another value for CEQ. Thus, I could loop over all possible values of w and choose the one that gives the highest CEQ. That seems like bad programming, though, and I was wondering whether I can do this more efficiently in MATLAB: I want to model the solution to my fixed-point equation as a function of w, but I don't know how to implement it.
To be more precise, here is some sample code:
clear all
clc
NoDraws = 1000000;
T_hat = 12;
mu = 0.0058;
variance = 0.0017;
rf = 0.0036;
sim_returns(:,T_hat/12) = T_hat*mu + sqrt(T_hat*variance)*randn(NoDraws,1);
A = 5;
kappa=1;
l=0;
theta = 1 - l*(kappa^(1-A) - 1) *(kappa>1);
CEQ_DA_0 = 1.1;
CEQ_opt = -1000;
w_opt = 0;
W_T = @(w) (1-w)*exp(rf*T_hat) + w*exp(rf*T_hat + sim_returns(:,T_hat/12));
for w = 0.01:0.01:0.99
    W = W_T(w);
    fp = @(CEQ) theta*CEQ^(1-A)/(1-A) - mean( W.^(1-A)/(1-A) ) + l*mean( ((kappa*CEQ)^(1-A)/(1-A) - W.^(1-A)/(1-A)) .* (W < kappa*CEQ) );
    CEQ_DA = fzero(fp, CEQ_DA_0);
    if CEQ_DA > CEQ_opt
        CEQ_opt = CEQ_DA;
        w_opt = w;
    end
end
That is, in the loop, I fix a w, solve the fixed-point equation, and store the value for CEQ. If some other w gives a bigger value for CEQ, the current optimal w gets replaced by that new w. What I would like to have (instead of the loop part) is something like this:
fp = @(CEQ,w) theta*CEQ^(1-A)/(1-A) - mean( W_T(w).^(1-A)/(1-A) ) + l*mean( ((kappa*CEQ)^(1-A)/(1-A) - W_T(w).^(1-A)/(1-A)) .* (W_T(w) < kappa*CEQ) );
CEQ_DA = @(w) fzero(fp, CEQ_DA_0);
[w_opt, fval] = fminbnd(CEQ_DA, 0, 1);
Your proposed solution is very close. In words, you're defining fp as a function of two arguments, and you would like CEQ_DA to be a function of w that solves fp for CEQ at that given w. The only issue is that fzero doesn't know which parameter of fp to solve over, because it can't match anonymous-function parameters and fp parameters by name.
The answer is yet one more anonymous function inside the fzero call, to turn fp(CEQ,w) into a one-argument function of CEQ alone, which fzero can then solve:
CEQ_DA = @(w) fzero(@(CEQ) fp(CEQ, w), CEQ_DA_0);
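One caveat, shown as a minimal sketch reusing the names from the question: the goal is to maximize CEQ, but fminbnd minimizes, so the handle should be negated (the bounds here mirror the 0.01:0.01:0.99 grid of the original loop):
% fminbnd minimizes, so hand it -CEQ_DA(w) in order to maximize CEQ
CEQ_DA = @(w) fzero(@(CEQ) fp(CEQ, w), CEQ_DA_0);
[w_opt, negCEQ] = fminbnd(@(w) -CEQ_DA(w), 0.01, 0.99);
CEQ_opt = -negCEQ;
If CEQ(w) turns out to be noisy or non-smooth, the coarse grid loop from the question remains a useful cross-check on what fminbnd returns.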
I am solving a problem in my Macroeconomics class. Consider the following equation:
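$$\frac{1}{A k^{\alpha} + (1-\delta)\,k - k'} \;=\; \beta\,\frac{1-\delta + \alpha A\,(k')^{\alpha-1}}{c(k')}$$
(The equation did not come through in the post; it is reconstructed here from the euler function handle in the code below.)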
Here, k is fixed and c(k) was defined through the interp1 function in Matlab. Here is my code:
beta = 0.98;
delta = 0.13;
A = 2;
alpha = 1/3;
n_grid = 1000; % Number of points for capital
k_grid = linspace(5, 15, n_grid)';
tol = 1e-5;
max_it = 1000;
c0 = ones(n_grid, 1);
new_k = zeros(n_grid, 1);
dist_c = tol + 1;
it_c = 0;
while dist_c > tol && it_c < max_it
    c_handle = @(k_tomorrow) interp1(k_grid, c0, k_tomorrow, 'linear', 'extrap');
    for i = 1:n_grid
        % Solve for k'
        euler = @(k_tomorrow) (1/((1-delta)*k_grid(i) + A*k_grid(i)^alpha - k_tomorrow)) - beta*(1-delta + alpha*A*k_tomorrow^(alpha - 1))/c_handle(k_tomorrow);
        new_k(i) = fzero(euler, k_grid(i)); % What's a good guess for fzero?
    end
    % Compute new values for consumption
    new_c = A*k_grid.^alpha + (1-delta)*k_grid - new_k;
    % Check convergence
    dist_c = norm(new_c - c0);
    c0 = new_c;
    it_c = it_c + 1;
end
When I run this code, for some indexes $i$ it runs fine and fzero can find the solution. But for other indexes it just returns NaN and exits without finding the root. This is a somewhat well-behaved problem in economics: the solution we are looking for does exist, and the algorithm I tried to implement is guaranteed to work. But I don't have much experience with solving this in MATLAB and I guess I have a silly mistake somewhere. Any ideas on how to proceed?
This is the typical error message:
Exiting fzero: aborting search for an interval containing a sign change
because complex function value encountered during search.
(Function value at -2.61092 is 0.74278-0.30449i.)
Check function or try again with a different starting value.
Thanks a lot in advance!
The only term that can produce complex numbers is
k'^(alpha - 1) = k'^(-2/3)
You probably want the result according to the real variant of the cube root, which you could get as
sign(k') * abs(k')^(-2/3)
or more generally and avoiding divisions by zero
k' * (1e-16+abs(k'))^(alpha - 2)
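Applied to the euler handle from the question, that guard could look like the following sketch (real_pow is a helper name introduced here, not something from the original code):
% Real-valued power for possibly negative k_tomorrow (alpha - 1 = -2/3 here)
real_pow = @(k, p) sign(k) .* abs(k).^p;
euler = @(k_tomorrow) (1/((1-delta)*k_grid(i) + A*k_grid(i)^alpha - k_tomorrow)) ...
    - beta*(1-delta + alpha*A*real_pow(k_tomorrow, alpha - 1))/c_handle(k_tomorrow);
Alternatively, call fzero with a bracketing interval [a b] of strictly positive capital values where the residual changes sign, so the search never wanders into negative k'.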
I am currently working on an assignment where I need to create two different controllers in Matlab/Simulink for a robotic exoskeleton leg. The idea behind this is to compare both of them and see which controller is better at assisting a human wearing it. I am having a lot of trouble putting specific equations into a Matlab function block to then run in Simulink to get results for an AFO (adaptive frequency oscillator). The link has the equations I'm trying to put in and the following is the code I have so far:
function [pos_AFO, vel_AFO, acc_AFO, offset, omega, phi, ampl, phi1] = LHip(theta, eps, nu, dt, AFO_on)
t = 0;
% syms j
% M = 6;
% j = sym('j', [1 M]);
if t == 0
omega = 3*pi/2;
theta = 0;
phi = pi/2;
ampl = 0;
else
omega = omega*(t-1) + dt*(eps*offset*cos(phi1));
theta = theta*(t-1) + dt*(nu*offset);
phi = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
phi1 = phi*(t-1) + dt*(omega + eps*offset*cos(phi*core(t-1)));
ampl = ampl*(t-1) + dt*(nu*offset*sin(phi));
offset = theta - theta*(t-1) - sym(ampl*sin(phi), [1 M]);
end
pos_AFO = (theta*(t-1) + symsum(ampl*(t-1)*sin(phi* (t-1))))*AFO_on; %symsum needs input argument for index M and range
vel_AFO = diff(pos_AFO)*AFO_on;
acc_AFO = diff(vel_AFO)*AFO_on;
end
https://www.pastepic.xyz/image/pg4mP
Essentially, I don't know how to do the subscripts, the sigma (summation), or the (t+1) function. Any help is appreciated, as this is due next week.
You are looking to find the result of an adaptive process, therefore your algorithm needs to consider time as it progresses. There is no (t-1) operator as such; it is just mathematical notation telling you that you need to reuse an old value to calculate a new value.
omega_old = 0;
theta_old = 0;
% initialize the rest of your variables
for t = 1:N
    omega(t) = omega_old + % here is the rest of your omega calculation
    theta(t) = theta_old + % ...
    % more code .....
    % remember your old values for next iteration
    omega_old = omega(t);
    theta_old = theta(t);
end
I think you forgot to apply the modulo operation to phi judging by the original formula you linked. As a general rule, design your code in small pieces, make sure the output of each piece makes sense and then combine all pieces and make sure the overall result is correct.
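Since a Simulink MATLAB Function block is called once per time step, one way to "remember your old values" between calls is to use persistent variables. Below is a minimal sketch under that assumption; the function name is hypothetical and the update lines are placeholders that only mirror the shapes in the question's code, so the actual AFO equations from the linked image still need to be substituted:
function pos_AFO = LHip_sketch(theta_meas, eps, nu, dt, AFO_on)
% Sketch only: state is carried across time steps with persistent variables.
persistent omega phi ampl offset
if isempty(omega)                    % first simulation step: initialize the state
    omega = 3*pi/2;  phi = pi/2;  ampl = 0;  offset = 0;
end
% Placeholder updates -- substitute the actual AFO equations here.
omega  = omega + dt*( eps*offset*cos(phi) );
phi    = phi   + dt*( omega + eps*offset*cos(phi) );
ampl   = ampl  + dt*( nu*offset*sin(phi) );
offset = theta_meas - ampl*sin(phi);  % tracking error reused at the next step
pos_AFO = AFO_on * ampl * sin(phi);   % oscillator output
end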
I wanted to compute a finite difference with respect to the change of the function in Matlab. In other words
f(x+e_i) - f(x)
is what I want to compute. Note that its very similar to the first order numerical partial differentiation (forward differentiation in this case) :
(f(x+e_i) - f(x)) / (e_i)
Currently I am using for loops to compute it but it seems that Matlab is much slower than I thought. I am doing it as follows:
function [ dU ] = numerical_gradient(W, f, eps)
%compute gradient or finite difference update numerically
[D1, D2] = size(W);
dU = zeros(D1, D2);
for d1 = 1:D1
    for d2 = 1:D2
        e = zeros([D1,D2]);
        e(d1,d2) = eps;
        f_e1 = f(W+e);
        f_e2 = f(W-e);
        %numerical_derivative = (f_e1 - f_e2)/(2*eps);
        %dU(d1,d2) = numerical_derivative
        numerical_difference = f_e1 - f_e2;
        dU(d1,d2) = numerical_difference;
    end
end
It seems really difficult to vectorize the above code, because numerical differences follow the definition of the gradient and partial derivatives, which is:
df_dW = [ ..., df_dWi, ...]
where df_dWi assumes the other coordinates are fixed and it only worries about the change of the variable Wi. Thus, I can't just change all the coordinates at once.
Is there a better way to do this? My intuition tells me that the best way would be to implement this not in MATLAB but in some other language, say C, and then have MATLAB call that library. Is that true? Does it mean that the best solution is some MATLAB library that does this for me?
I did see:
https://www.mathworks.com/matlabcentral/answers/332414-what-is-the-quickest-way-to-find-a-gradient-or-finite-difference-in-matlab-of-a-real-function-in-hig
but unfortunately, it computes exact derivatives, which isn't what I am looking for. I am explicitly looking for differences, i.e. a "bad approximation" to the gradient.
Since it seems this code is not easy to vectorize (in fact, my intuition tells me it's not possible to do so), my only other idea is to implement this finite difference function in C and then have MATLAB call that C function. Is this a good idea? Does anyone know how to do this?
I did try reading the following:
https://www.mathworks.com/help/matlab/matlab_external/standalone-example.html
but it was too difficult for me to understand because I have no idea what a mex file is, whether I need to have an arrayProduct.c file as well as a mex.h file, whether I also need a Matlab file, etc. If there just existed a way to simply download a working example with all the functions they suggest there and some instructions to compile it, then it would be super helpful. But just reading the html article like that, it's impossible for me to infer what they want me to do.
For the sake of completeness, it seems reddit has some comments in its discussion of this:
https://www.reddit.com/r/matlab/comments/623m7i/how_does_one_compute_a_single_finite_differences/
Here is a more efficient way of doing so:
function [ vNumericalGrad ] = CalcNumericalGradient( hInputFunc, vInputPoint, epsVal )
numElmnts = size(vInputPoint, 1);
vNumericalGrad = zeros([numElmnts, 1]);
refVal = hInputFunc(vInputPoint);
for ii = 1:numElmnts
    % Set the perturbation vector
    refInVal = vInputPoint(ii);
    vInputPoint(ii) = refInVal + epsVal;
    % Compute Numerical Gradient
    vNumericalGrad(ii) = (hInputFunc(vInputPoint) - refVal) / epsVal;
    % Reset the perturbation vector
    vInputPoint(ii) = refInVal;
end
end
This code allocates less memory.
The performance of the above code will be dominated by the speed of hInputFunc.
The small tricks compared to the original code are:
No memory reallocation of e at each iteration.
Instead of the vector addition W + e, there are 2 scalar assignments into the array.
Halving the calls to hInputFunc() by computing the reference value outside the loop (this only works for forward / backward differences).
This will probably be very close to C performance, unless you can implement the function that computes the value (hInputFunc) more efficiently in C.
A full implementation can be found in the StackOverflow Q44984132 repository.
See CalcFunGrad( vX, hObjFun, difMode, epsVal ).
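For reference, a quick hypothetical usage of CalcNumericalGradient (the test function and evaluation point are made up):
% Forward-difference gradient of f(v) = sum(v.^2) at [1; 2; 3]; the exact gradient is [2; 4; 6]
vGrad = CalcNumericalGradient(@(v) sum(v.^2), [1; 2; 3], 1e-6);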
A way better approach (numerically more stable, no issue of choosing the perturbation hyperparameter, accurate up to machine precision) is to use algorithmic/automatic differentiation. For this you need the MATLAB Deep Learning Toolbox. Then you can use dlgradient to compute the gradient. Below you find the source code corresponding to your example.
Most importantly, you can examine the error and observe that the deviation of the automatic approach from the analytical solution is indeed machine precision, while for the finite difference approach (I chose second-order central differences) the error is orders of magnitude higher. For 100 points and a range of $[-10, 10]$ these errors are somewhat tolerable, but if you play a bit with Rand_Max and n_points you will observe that the errors become larger and larger.
Error of algorithmic / automatic diff. is: 1.4755528111219851e-14
Error of finite difference diff. is: 1.9999999999348703e-01 for perturbation 1.0000000000000001e-01
Error of finite difference diff. is: 1.9999999632850161e-03 for perturbation 1.0000000000000000e-02
Error of finite difference diff. is: 1.9999905867860374e-05 for perturbation 1.0000000000000000e-03
Error of finite difference diff. is: 1.9664569947425062e-07 for perturbation 1.0000000000000000e-04
Error of finite difference diff. is: 1.0537897883625319e-07 for perturbation 1.0000000000000001e-05
Error of finite difference diff. is: 1.5469326944467290e-06 for perturbation 9.9999999999999995e-07
Error of finite difference diff. is: 1.3322061696937969e-05 for perturbation 9.9999999999999995e-08
Error of finite difference diff. is: 1.7059535957436630e-04 for perturbation 1.0000000000000000e-08
Error of finite difference diff. is: 4.9702408787320664e-04 for perturbation 1.0000000000000001e-09
Source Code:
f2.m
function y = f2(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
y = x1.^2 + 2*x2.^2 + 2*x3.^3 + 2*x1.*x2 + 2*x2.*x3;
f2_grad_analytic.m:
function grad = f2_grad_analytic(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
grad(:, 1) = 2*x1 + 2*x2;
grad(:, 2) = 4*x2 + 2*x1 + 2 * x3;
grad(:, 3) = 6*x3.^2 + 2*x2;
f2_grad_AD.m:
function grad = f2_grad_AD(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
y = x1.^2 + 2*x2.^2 + 2*x3.^3 + 2*x1.*x2 + 2*x2.*x3;
grad = dlgradient(y, x);
CalcNumericalGradient.m:
function NumericalGrad = CalcNumericalGradient(InputPoints, eps)
% (Central, second order accurate FD)
NumericalGrad = zeros(size(InputPoints) );
for i = 1:size(InputPoints, 2)
    perturb = zeros(size(InputPoints));
    perturb(:, i) = eps;
    NumericalGrad(:, i) = (f2(InputPoints + perturb) - f2(InputPoints - perturb)) / (2 * eps);
end
main.m:
clear;
close all;
clc;
n_points = 100;
Rand_Max = 20;
x_test_FD = rand(n_points, 3) * Rand_Max - Rand_Max/2;
% Calculate analytical solution
grad_analytic = f2_grad_analytic(x_test_FD);
grad_AD = zeros(n_points, 3);
for i = 1:n_points
    x_test_dl = dlarray(x_test_FD(i,:));
    grad_AD(i,:) = dlfeval(@f2_grad_AD, x_test_dl);
end
Err_AD = norm(grad_AD - grad_analytic);
fprintf("Error of algorithmic / automatic diff. is: %.16e\n", Err_AD);
eps_range = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8, 1e-9];
for i = 1:length(eps_range)
    eps = eps_range(i);
    grad_FD = CalcNumericalGradient(x_test_FD, eps);
    Err_FD = norm(grad_FD - grad_analytic);
    fprintf("Error of finite difference diff. is: %.16e for perturbation %.16e\n", Err_FD, eps);
end
So I am trying to go through a for loop that increments by 0.1 every time and does this until another variable h is less than or equal to zero. Then I am supposed to graph this h variable against another variable x. The code that I wrote looks like this:
O = 20;
v = 200;
g = 32.2;
for t = 0:.1:12
% Calculate the height
h(t) = (v)*(t)*(sin(O))-(1/2)*(g)*(t^2);
% Calculate the horizontal location
x(t) = (v)*(t)*cos(O);
if t > 0 && h <= 0
break
end
end
The error that I keep getting when running this code says "Attempted to access h(0); index must be a positive integer or logical." I don't understand what exactly is going on for this to happen. So my question is: why is this happening, and is there a way I can solve it? Thank you in advance.
You're using t as your loop variable as well as your indexing variable. This doesn't work, because you'll try to access h(0), h(0.1), h(0.2), etc., which doesn't make sense. As the error says, you can only index arrays with positive integers or logicals. You could replace your code with the following:
t = 0:0.1:12;
for i = 1:length(t)
% use t(i) instead of t now
end
I will also point out that you don't need to use a for loop to do this. MATLAB is optimised for acting on matrices (and vectors), and will in general run faster on vectorised functions rather than for loops. For instance, your equation for h could be replaced with the following:
O = 20;
v = 200;
g = 32.2;
t = 0:0.1:12;
h = v * t * sin(O) - 0.5 * g * t.^2;
The only difference is that you have to use the element-wise square (.^2) rather than the normal square (^2). This tells MATLAB to square each element of the vector t; the plain ^ would attempt a matrix power, which is not defined for a row vector.
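To finish the task from the question (stop once the height is no longer positive and plot h against x), a minimal sketch building on the vectorized h above:
x = v * t * cos(O);                  % horizontal location
iStop = find(t > 0 & h <= 0, 1);     % first sample where the height has dropped to zero or below
if ~isempty(iStop)
    t = t(1:iStop);  h = h(1:iStop);  x = x(1:iStop);
end
plot(x, h), xlabel('x'), ylabel('h')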
In short:
As the error says, the array index needs to be a positive integer or logical.
But your t is t = 0:0.1:12, i.e. mostly non-integer values. One fix is to loop over an integer index and look up t from a vector:
O = 20;
v = 200;
g = 32.2;
t_vec = 0:.1:12;
for idx_t = 1:numel(t_vec)
    t = t_vec(idx_t);
    % Calculate the height
    h(idx_t) = (v)*(t)*(sin(O)) - (1/2)*(g)*(t^2);
    % Calculate the horizontal location
    x(idx_t) = (v)*(t)*cos(O);
    if t > 0 && h(idx_t) <= 0
        break
    end
end
Look at this question's answer for more options: Subscript indices must either be real positive integers or logical error
I have a summation objective function (non-linear portfolio optimization) which looks like:
minimize the double sum of w(i)*w(j)*cv(i,j) over i = 1 to 10 and j = 1 to 10
w is the decision vector
cv is a known 10 by 10 matrix
I have the formulation done for the constraints (a separate .m file for the project constraints) and for the execution of the fmincon (a separate .m file for the lower/upper bounds, initial value, and calling fmincon with the arguments).
I just can't figure out how to do the objective function. I'm used to linear programming in GLPK rather than MATLAB, so I'm not doing so well.
Currently I've got:
ObjectiveFunction.m
function f = obj(w)
cv = [all the constants are in here]
i = 1;
j = 1;
n = 10;
var = 0;
while i <= n
while j<=n
var = var + abs(w(i)*w(j)*cv(i, j));
j = j + 1;
end
i = i + 1;
end
f = var
...but this isn't working.
Any help would be appreciated! Thanks in advance :)
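For what it's worth, the double sum above is just the quadratic form w'*cv*w, so a minimal sketch of the objective (taking cv as a second argument instead of hard-coding the constants, and without the abs() that appears in the attempted code) could be:
function f = obj(w, cv)
% Objective: sum over i and j of w(i)*w(j)*cv(i,j), i.e. the quadratic form w'*cv*w
w = w(:);             % make sure w is a column vector
f = w' * cv * w;
end
With cv defined in the driver script, fmincon can then be given the handle @(w) obj(w, cv), so the constants stay out of the objective file.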
So this is from a class I took a few years ago, but it addresses a very similar problem to your own with respect to using an optimizer (fminunc here; fminsearch works in much the same way) to fit some values. The problem is essentially that you have t and y, and you want a continuous exponential function of the form c1*t.*exp(c2*t) to represent them. The textbook I lifted the values from is called Numerical Analysis by Timothy Sauer. Unfortunately, I don't remember the exact problem or chapter, but it's in there somewhere.
c1 and c2 are found by iteratively minimizing the residual y - ((c1) * t .* exp((c2) * t)). Try copying and running my code below to get a feel for things:
%% Main code
clear all;
t = [1,2,3,4,5,6,7,8];
y = [8,12.3,15.5,16.8,17.1,15.8,15.2,14];
lambda0=[1 -.5];
lambda=fminunc(@expdecayfun,lambda0, ...
optimset('LargeScale','off','Display','iter','TolX',1.e-6),t,y);
c1=lambda(1);
c2=lambda(2);
fprintf('Using the BFGS method through fminunc, c1 = %e\n',c1);
fprintf('and c2 = %e. Since these values match textbook values for\n', c2);
fprintf('c1 and c2, I will stop here.\n');
%% Index of functions:
% expdecayfun
function res=expdecayfun(lambda,t,y)
c1=lambda(1);
c2=lambda(2);
r=y-((c1)*t.*exp((c2)*t));
res=norm(r);
Hope this helps!
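As a quick sanity check, you could plot the fitted curve against the data once c1 and c2 are recovered:
% Compare the data points with the fitted model c1*t.*exp(c2*t)
tt = linspace(min(t), max(t), 200);
plot(t, y, 'o', tt, c1*tt.*exp(c2*tt), '-')
xlabel('t'); ylabel('y'); legend('data', 'fit');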