I want to optimize an objective function with dependent decision variables, as below:
Sum_i [ I * (x(i) - x(i-1) + lo(i) - g(i)) * p(i) ]
Please note that the decision variable is only x(i); x(i-1) is a value that comes from the previous step of the optimization.
I have no idea how to write this objective function. Should I use a function handle? Thanks
Perhaps this is what you're asking?
Imagine you have a 3 by 1 vector x.
x = [x_1; x_2; x_3]
and you want to compute:
y = [x_1; x_2; x_3] - [0; x_1; x_2]
You could do this in Matlab with the code:
y = x - [0;x(1:end-1)];
This works since x(1:end-1) refers to [x_1; x_2]. You can use this snippet to write your overall objective function, as sketched below.
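For instance, a minimal sketch of the overall objective as an anonymous function might look like this, assuming I is a scalar, lo, g and p are column vectors the same length as x, and x0 is the scalar value carried over from the previous optimization step (x0 is my placeholder name, not from the question):
obj = @(x) sum(I .* (x - [x0; x(1:end-1)] + lo - g) .* p);  % x0 is a placeholder for the previous step's value
You could then hand obj straight to a solver such as fmincon or fminunc.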
(Crossposted from Matlab Answers) I am trying to simplify this set of algebraic equations. Then, I would like to have Matlab calculate the Jacobian for me. But it does not seem to work as I expect.
Consider this simple MWE:
% State Variables
syms x_0 x_1 x_2 x_3
% Input Variables
syms u_1 u_2 u_3
% Constants
syms k_1 V_liq dvs
% 3 Algebraic Equations
stateEquations = [...
x_1 == (x_0*(V_liq - u_1/dvs*1e3)*1e-3 + u_1)*1e3/V_liq*exp(-k_1), ...
x_2 == (x_1*(V_liq - u_2/dvs*1e3)*1e-3 + u_2)*1e3/V_liq*exp(-k_1), ...
x_3 == (x_2*(V_liq - u_3/dvs*1e3)*1e-3 + u_3)*1e3/V_liq*exp(-k_1)];
dstate_x3 = solve(stateEquations, x_3)
dstate_du = jacobian(dstate_x3, [u_1 u_2 u_3])
Since dstate_x3 is empty, the Jacobian is also empty. But I simply want Matlab to replace x_2 in eq. 3 by its right-hand side, and x_1 by its right-hand side...
Could you please give me a hint on how to achieve this with Symbolic Math Toolbox? (Deriving it manually would be very time-consuming, especially with x_i, i > 3)
Since your system has 3 equations, you must solve it for 3 variables, not just for the variable x_3. Because solve doesn't know which variables you want to solve your system for, it returns an empty solution.
You want to solve for x_1, x_2 and x_3, so replace the penultimate line of your code by
dstate = solve(stateEquations, [x_1 x_2 x_3])
Now dstate is a 1x1 struct with 3 sym fields: x_1, x_2 and x_3. Hence, replace the last line of your code with
dstate_du = jacobian(dstate.x_3, [u_1 u_2 u_3])
Finally, you might want to call simplify(dstate_du).
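Putting it all together, the corrected part of the script might look like this (same symbols as above):
dstate = solve(stateEquations, [x_1 x_2 x_3]);
dstate_du = simplify(jacobian(dstate.x_3, [u_1 u_2 u_3]))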
I have data like this:
y = [0.001
0.0042222222
0.0074444444
0.0106666667
0.0138888889
0.0171111111
0.0203333333
0.0235555556
0.0267777778
0.03]
and
x = [3.52E-06
9.72E-05
0.0002822918
0.0004929136
0.0006759156
0.0008199029
0.0009092797
0.0009458332
0.0009749509
0.0009892005]
and I want y to be a function of x with y = a*(0.01 - b*n^(-c*x)).
What is the best and easiest computational approach to find the combination of the coefficients a, b and c that best fits the data?
Can I use Octave?
Your function
y = a*(0.01 - b*n^(-c*x))
is in quite a specific form, with 4 unknowns. In order to estimate your parameters from your list of observations, I would recommend that you simplify it to
y = β1 + β2^(β3*x)
This becomes our model, and we can find a good set of betas by minimising the sum of squared errors.
In core Matlab you could use fminsearch to find these β parameters (let's call them our parameter vector β), and then use simple algebra to get back to your a, b, c and n (assuming you know either b or n upfront). Recent Octave versions also ship fminsearch; otherwise start by looking in the optim package: http://octave.sourceforge.net/optim/index.html.
We're going to call fminsearch, but we need to somehow pass in your observations (i.e. x and y); we do that by capturing them in an anonymous function, like example 2 from the docs:
beta = fminsearch(@(beta) objfun(x,y,beta), beta0) %// beta0 holds your initial guesses for beta, e.g. [0,0,0] or [1,1,1]. You need to pick these to be somewhat close to the correct values.
And we define our objective function like this:
function sse = objfun(x, y, beta)
    f = beta(1) + beta(2).^(beta(3).*x);
    sse = sum((y-f).^2); %// sum of squared errors (SSE): the quantity we are trying to minimise!
end
So putting it all together:
y= [0.001; 0.0042222222; 0.0074444444; 0.0106666667; 0.0138888889; 0.0171111111; 0.0203333333; 0.0235555556; 0.0267777778; 0.03];
x= [3.52E-06; 9.72E-05; 0.0002822918; 0.0004929136; 0.0006759156; 0.0008199029; 0.0009092797; 0.0009458332; 0.0009749509; 0.0009892005];
beta0 = [0,0,0];
beta = fminsearch(@(beta) objfun(x,y,beta), beta0)
Now it's your job to solve for a, b and c in terms of beta(1), beta(2) and beta(3) which you can do on paper.
The problem in the link:
can be integrated analytically, and the answer is 4. However, I'm interested in integrating it numerically using Matlab, because it is similar in form to a problem that I can't integrate analytically. The difficulty in the numerical integration arises because the function in the two inner integrals depends on x, y and z, and z can't be factored out.
By no means is this elegant; I hope someone can make better use of Matlab's functions than me. I tried the brute-force way just to practice numerical integration, and I tried to avoid the pole in the inner integral at z = 0 by exploiting the fact that the integrand is also multiplied by z. I get 3.9993. Someone must be able to get a better result with something better than the trapezoidal rule.
function [] = sofn
    clear all
    global x y z xx yy zz dx dy
    % grids for the three integration variables
    dx = 0.05;
    x = 0:dx:1;
    dy = 0.002;
    dz = 0.002;
    y = 0:dy:1;
    z = 0:dz:2;
    xx = length(x);
    yy = length(y);
    zz = length(z);
    % outermost integral over z by the trapezoidal rule
    s1 = 0;
    for i = 1:zz-1
        s1 = s1 + 0.5*dz*(z(i+1)*exp(inte1(z(i+1))) + z(i)*exp(inte1(z(i))));
    end
    s1
end
function s2 = inte1(localz)
    global y yy dy
    % middle integral over y; at z = 0 the inner integral diverges, but it
    % is multiplied by z outside, so return 0 there
    if localz == 0
        s2 = 0;
    else
        s2 = 0;
        for j = 1:yy-1
            s2 = s2 + 0.5*dy*(inte2(y(j), localz) + inte2(y(j+1), localz));
        end
    end
end
function s3 = inte2(localy, localz)
    global x xx dx
    % innermost integral over x; the integrand 1/(y+z) is constant in x,
    % so each trapezoid contributes 0.5*dx*(2/(y+z))
    s3 = 0;
    for k = 1:xx-1
        s3 = s3 + 0.5*dx*(2/(localy + localz));
    end
end
Well, this is strange: on the poster's similar previous question I claimed this can't be done, and now, after having looked at Guddu's answer, I realize it's not that complicated. What I wrote before, that a numerical integration results in a number but not a function, is true but beside the point: one can just define a function that evaluates the integral for every given parameter, and this way one effectively does obtain a function as the result of a numerical integration.
Anyways, here it goes:
function q = outer
    % outer integral over z; start at eps to avoid the pole at z = 0
    f = @(z) (z .* exp(inner(z)));
    q = quad(f, eps, 2);
end

function qs = inner(zs)
    % compute \int_0^1 1 / (y + z) dy for each given z
    qs = nan(size(zs));
    for i = 1 : numel(zs)
        z = zs(i);
        f = @(y) (1 ./ (y + z));
        qs(i) = quad(f, 0, 1);
    end
end
I applied the simplification suggested by myself in a comment, eliminating x. The function inner calculates the value of the inner integral over y as a function of z. Then the function outer computes the outer integral over z. I avoid the pole at z = 0 by letting the integration run from eps instead of 0. The result is
4.00000013663955
inner has to be implemented using a for loop because a function given to quad needs to be able to return its value simultaneously for several argument values.
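On newer Matlab releases you could write the same computation with integral instead of quad; here is a sketch, with arrayfun playing the role of the for loop in inner:
inner = @(z) arrayfun(@(zz) integral(@(y) 1./(y + zz), 0, 1), z);  % vectorized over z
q = integral(@(z) z .* exp(inner(z)), eps, 2)                      % outer integral, again starting at eps
This should again give a value very close to the analytic answer of 4.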
We have an equation similar to the Fredholm integral equation of the second kind.
To solve this equation we have been given an iterative solution that is guaranteed to converge for our specific equation. Now our only problem consists in implementing this iterative procedure in MATLAB.
For now, the problematic part of our code looks like this:
function delta = delta(x,a,P,H,E,c,c0,w)
    delt = @(x) delta_a(x,a,P,H,E,c0,w);
    for i = 1:500
        delt = @(x) delt(x) - 1/E.*integral(@(xi) ((c(1)-c(2)*delt(xi))*ms(xi,x,a,P,H,w)), 0, a-0.001);
    end
    delta = delt;
end
delta_a is a function of x, and represents the initial value of the iteration. ms is a function of x and xi.
As you can see, we want delt to depend on both x (before the integral) and xi (inside the integral) in the iteration. Unfortunately, this way of writing the code (with the function handle) does not give us a numerical value, as we wish. Nor can we write delt as two different functions, one of x and one of xi, since xi is not defined (until integral defines it). So, how can we make sure that delt depends on xi inside the integral, and still get a numerical value out of the iteration?
Do any of you have suggestions for how we might solve this?
Using numerical integration
Explanation of the input parameters: x is a vector of numerical values, all the rest are constants. A problem with my code is that the input parameter x is not being used (I guess this means that x is being treated as a symbol).
It looks like you can nest anonymous functions in MATLAB:
>> f = @(x) 2*x
f =
    @(x)2*x
>> ff = @(x) f(f(x))
ff =
    @(x)f(f(x))
>> ff(2)
ans =
     8
>> f = ff;
>> f(2)
ans =
     8
It is also possible to rebind a handle to a new function, as shown above.
Thus, you can set up your iteration like
delta_old = @(x) delta_a(x);
for i = 1:500
    delta_new = @(x) delta_old(x) - integral(@(xi) delta_old(xi), 0, a); % schematic: insert your kernel and limits here
    delta_old = delta_new;
end
plus the inclusion of your parameters...
You may want to consider solving a discretized version of your problem.
Let K be the matrix which discretizes your Fredholm kernel k(t,s), e.g.
K(i,j) = int_a^b k(x_i, s) l_j(s) ds
where l_j(s) is, for instance, the j-th Lagrange interpolant associated with the interpolation nodes (x_i) = x_1, x_2, ..., x_n.
Then, running your Picard iterations is as simple as
phi_{n+1} = f + K*phi_n
i.e.
for i = 1:N
    phi = f + K*phi;
end
where phi_n and f are the nodal values of phi and f on the (x_i).
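For concreteness, here is a minimal runnable sketch of this idea using trapezoidal weights in place of the Lagrange-interpolant quadrature; the kernel k, the source f, the interval and the iteration count below are toy placeholders, not your actual problem data:
a = 0; b = 1;                                    % placeholder interval
k = @(t, s) 0.1*exp(-(t - s).^2);                % placeholder kernel k(t,s)
f = @(t) sin(pi*t);                              % placeholder source f(t)
n = 200;
t = linspace(a, b, n).';                         % interpolation nodes x_i
w = (b - a)/(n - 1)*[0.5; ones(n-2,1); 0.5];     % trapezoidal weights
K = k(t, t.').*w.';                              % K(i,j) ~ int k(x_i,s) l_j(s) ds
phi = f(t);                                      % initial guess phi_0 = f
for it = 1:100                                   % Picard iterations
    phi = f(t) + K*phi;
end
With these weights, K*phi approximates int_a^b k(x_i, s) phi(s) ds at all nodes at once.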
I want to evaluate a double integral of the form
$$\int_{-\infty}^a \int_{-\infty}^b \sum_{i,j}^K a_ia_jx^iy^j\exp(-x^2 - y^2 + xy)dx dy $$
where $a_i$ and $a_j$ are constants. Since the integral is linear, I can interchange summation and integration, but in this case I have to evaluate $K^2$ integrals and it takes too long. In that case I do the following:
for i = 1:K
    for j = 1:K
        fun = @(x,y) x.^i.*y.^j.*exp(-2.*(x.^2 + y.^2 - 2.*x.*y));
        part(i,j) = alpha(i)*alpha(j)*integral2(fun,-inf,a,-inf,b);
    end
end
It takes too long, so I want to evaluate only one integral, but I don't know how to vectorize $\sum_{i,j}^K a_ia_jx^iy^j\exp(-x^2 - y^2 + xy)$, namely, how to supply it to integral2. I would be very grateful for any help.
It looks like you'll need to have i and j be third and fourth dimensions, in order for there to be a chance that the code will work.
I also don't have integral2 (I use Octave, and integral2 is a newer Matlab function that Octave doesn't yet have), so I can't test it, but I'd think something like this might work:
alphaset = zeros(1,1,K,K);
alphaset(1,1,1:K,1:K) = alpha(1:K)'*alpha(1:K);
i_set = zeros(1,1,K,1);
j_set = zeros(1,1,1,K);
i_set(:) = 1:K;
j_set(:) = 1:K;
fun = @(x,y) x.^i_set.*y.^j_set.*exp(-2.*(x.^2 + y.^2 - 2.*x.*y));
part = squeeze(alphaset.*integral2(fun,-inf,a,-inf,b));
As I said, I can't promise that it'll work, because I don't know how integral2 works. But if you replace the integral2 with simply "sum(sum(fun([1,2,4],[3,-1,2])))", then it works as intended for that operation (that is, it sums over the x and y values, and the result is a matrix over the set of indices).
If you just want to improve speed, you may try parfor.
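A hedged sketch of what that could look like (requires the Parallel Computing Toolbox; alpha, a, b and K as in the question; part is sliced by rows so each worker fills one row independently):
part = zeros(K, K);
parfor i = 1:K
    row = zeros(1, K);                     % one row of results per worker
    for j = 1:K
        fun = @(x,y) x.^i.*y.^j.*exp(-2.*(x.^2 + y.^2 - 2.*x.*y));
        row(j) = alpha(i)*alpha(j)*integral2(fun, -inf, a, -inf, b);
    end
    part(i,:) = row;
end
Since every (i,j) integral is independent, this preserves the original results while distributing the work.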
Let $X=(x,x^2,\cdots,x^K)$, $Y=(y,y^2,\cdots,y^K)$, $A=(a_{ij})$ be a matrix with $a_{ij}=a_{i}a_{j}$, then
$$\sum_{i,j}^K a_{i}a_{j}x^iy^j=XAY^{T}$$
I don't have the integral2 function on my Matlab, so I didn't test whether it improves the speed a lot.
Also, I think you need to use syms x and y; after you compute $XAY^{T}$, use matlabFunction to convert the symbolic expression into a function handle. Here is my test code:
syms x y
X = [x, x^2];
Y = [y, y^2];
Z = X*Y';
fun = matlabFunction(Z);
ff = @(x,y) x^2 + y^2;
gg = fun(x,y).*ff(x,y);
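Combining both ideas into one integral2 call, a hedged, untested sketch (alpha, a and b are the question's data; K = 3 is just a small example size; the exponential follows the code in the question):
syms x y
K = 3;                                        % example size
A = alpha(:)*alpha(:).';                      % A(i,j) = a_i*a_j
Z = (x.^(1:K))*A*(y.^(1:K)).' * exp(-2*(x^2 + y^2 - 2*x*y));
fun = matlabFunction(Z, 'Vars', [x y]);       % numeric handle from the symbolic expression
part = integral2(fun, -inf, a, -inf, b)
matlabFunction emits element-wise operators by default, so the generated handle should accept the array inputs that integral2 supplies.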