fmincon: objective function for this?

I'm writing a program to optimize a packing problem using Robert Lang's algorithm from his webpage: https://langorigami.com/wp-content/uploads/2015/09/ODS1e_Algorithms.pdf
It is used to design origami models; the underlying problem is circle packing with constraints and differing radii.
I have created an example with 5 nodes, where 4 of them are terminal.
Using the definitions from that document, I want to work on the Scale Optimization (A.3).
According to the notation, in my example I have:
E = {e1, e2, e3, e4} ---> Edges
U = {u1, u2, u3, u4, u5} ---> Vertices (each with x and y coordinates)
Ut = {u2, u3, u4, u5} ---> Terminal vertices
P = {[u1, u2], [u1, u3], [u1, u4], [u1, u5],
[u2, u3], [u2, u4], [u2, u5],
[u3, u4], [u3, u5],
[u4, u5]} ---> All paths between vertices from U
Pt = {[u2, u3], [u2, u4], [u2, u5],
[u3, u4], [u3, u5],
[u4, u5]} ---> All paths between terminal vertices Ut
And if I assume that σ is equal to 0:
L = {100, 50, 100, 100,
150, 200, 200,
150, 150,
200} --> Length of the paths P
Note that each length is listed in the same position as its corresponding path in P.
So now, with all this defined, I have to optimize the scale.
For that, I should:
Minimize (-m) over {m, ui ∈ Ut} s.t.: (A-2)
1.- 0 ≤ ui,x ≤ w for all ui ∈ Ut (A-3)
2.- 0 ≤ ui,y ≤ h for all ui ∈ Ut (A-4)
3.- m·l(p) ≤ dist(ui, uj) for every path p = [ui, uj] ∈ Pt (A-5), from the document linked above.
So, in the example, merging (A-3) and (A-4), we can set the Lb and Ub for the fmincon function as:
Lb = zeros(1,8)
Ub = [ones(1,4)*w, ones(1,4)*h]
Since fmincon refuses a matrix input, I am passing the decision variables to it as a single vector:
X = [x1 x2 x3 x4 y1 y2 y3 y4], which is why Lb and Ub have that form.
From (A-5), we get this set of inequalities:
m*150 - sqrt((x(2)-x(3))^2 + (y(2)-y(3))^2) <= 0
m*200 - sqrt((x(2)-x(4))^2 + (y(2)-y(4))^2) <= 0
m*200 - sqrt((x(2)-x(5))^2 + (y(2)-y(5))^2) <= 0
m*150 - sqrt((x(3)-x(4))^2 + (y(3)-y(4))^2) <= 0
m*150 - sqrt((x(3)-x(5))^2 + (y(3)-y(5))^2) <= 0
m*200 - sqrt((x(4)-x(5))^2 + (y(4)-y(5))^2) <= 0
My main program is testFmincon.m:
[x,fval,exitflag] = Solver(zeros(1,10),zeros(1,10),ones(1,10)*700);
%% w = h = 700
I implemented the optimizer in the file Solver.m:
function [x,fval,exitflag,output,lambda,grad,hessian] = Solver(x0, lb, ub)
%% This is an auto generated MATLAB file from Optimization Tool.
%% Start with the default options
options = optimoptions('fmincon');
%% Modify options setting
options = optimoptions(options,'Display', 'off');
options = optimoptions(options,'Algorithm', 'sqp');
[x,fval,exitflag,output,lambda,grad,hessian] = ...
fmincon(@Objs,x0,[],[],[],[],lb,ub,@Objs2,options);
Where Objs2.m is:
function [c, ceq] = Objs2(xy)
length = size(xy);
length = length(2);
x = xy(1,1:length/2);
y = xy(1,(length/2)+1:length);
c(1) = sqrt((x(2) - x(3))^2 + (y(2)-y(3))^2) - m*150;
c(2) = sqrt((x(2) - x(4))^2 + (y(2)-y(4))^2) - m*200;
c(3) = sqrt((x(2) - x(5))^2 + (y(2)-y(5))^2) - m*200;
c(4) = sqrt((x(3) - x(4))^2 + (y(3)-y(4))^2) - m*150;
c(5) = sqrt((x(3) - x(5))^2 + (y(3)-y(5))^2) - m*150;
c(6) = sqrt((x(4) - x(5))^2 + (y(4)-y(5))^2) - m*200;
ceq=[x(1) y(1)]; %% x1 and y1 are 0.
end
And Objs.m file is:
function c = Objs(xy)
length = size(xy);
length = length(2);
x = xy(1,1:length/2);
y = xy(1,(length/2)+1:length);
c(1) = sqrt((x(2) - x(3))^2 + (y(2)-y(3))^2) - m*150;
c(2) = sqrt((x(2) - x(4))^2 + (y(2)-y(4))^2) - m*200;
c(3) = sqrt((x(2) - x(5))^2 + (y(2)-y(5))^2) - m*200;
c(4) = sqrt((x(3) - x(4))^2 + (y(3)-y(4))^2) - m*150;
c(5) = sqrt((x(3) - x(5))^2 + (y(3)-y(5))^2) - m*150;
c(6) = sqrt((x(4) - x(5))^2 + (y(4)-y(5))^2) - m*200;
c = m*sum(c);
end
But it doesn't work correctly; I think I'm using the fmincon function wrong.
I also don't know how to optimize (-m)... should I use syms m or something like that?
edit: The output is always [0 0 0 0 0 0 0 0 0 0] when it shouldn't be.
Thank you so much in advance.

Some more observations.
We can simplify things a little bit by getting rid of the square root. So the constraints look like:
set c = {x,y}
maximize m2
m2 * sqrpathlen(ut,vt) <= sum(c, sqr(pos(ut,c)-pos(vt,c))) for all paths ut->vt
where m2 is the square of m.
This is indeed non-convex. With a global solver I get as solution:
---- 83 VARIABLE m2.L = 12.900 m^2, with m=scale
PARAMETER m = 3.592
---- 83 VARIABLE pos.L  positions
              x          y
u3      700.000    700.000
u4      700.000    161.251
u5      161.251    700.000
(pos(u2,x) and pos(u2,y) are zero).
With a local solver using 0 as starting point, we see we don't move at all:
---- 83 VARIABLE m2.L = 0.000 m^2, with m=scale
PARAMETER m = 0.000
---- 83 VARIABLE pos.L positions
( ALL 0.000 )
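To make the fmincon route concrete, here is a minimal sketch of my own (not the poster's files; the function name, the random starting point, and variable names are assumptions): the scale m is appended to the decision vector, so -m can be minimized directly without syms, and the squared distances suggested above replace the sqrt.
function demoScaleOpt
% Minimal sketch: decision vector z = [x1..x5, y1..y5, m] (11 entries).
w = 700;  h = 700;                          % paper size from the question
L  = [150 200 200 150 150 200];             % terminal path lengths (sigma = 0)
IJ = [2 3; 2 4; 2 5; 3 4; 3 5; 4 5];        % vertex pairs of the terminal paths
z0 = [w*rand(1,5), h*rand(1,5), 1];         % random start; an all-zero start is a stationary point
lb = zeros(1,11);
ub = [w*ones(1,5), h*ones(1,5), Inf];
opts = optimoptions('fmincon','Algorithm','sqp','Display','iter');
z = fmincon(@(z) -z(11), z0, [],[],[],[], lb, ub, @nlcon, opts);
fprintf('scale m = %.4f\n', z(11));
    function [c, ceq] = nlcon(z)
        x = z(1:5);  y = z(6:10);  m = z(11);
        c = zeros(6,1);
        for k = 1:6
            i = IJ(k,1);  j = IJ(k,2);
            c(k) = (m*L(k))^2 - (x(i)-x(j))^2 - (y(i)-y(j))^2;  % (m*l(p))^2 <= squared distance
        end
        ceq = [x(1); y(1)];                 % pin u1 at the origin, as in Objs2.m
    end
end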


Generating a non-uniform 1D mesh in MATLAB

I am trying to generate a non-uniform 1D mesh with constant stretching by a value r between 0 and 1.
This is the code I've tried, but I can't seem to get it to work. The final value is never 1, and I'm not sure if this is because the number of indices needs to change so that the total distance remains the same. I'm new to this; I've never had to make any kind of unstructured mesh before, so any help would be really great!
n = 20; % number of indices
h = 1/(n-1); % unstretched grid spacing
r = .9; % stretching factor
x2 = zeros(n,1);
for i=2:n
x2(i) = x2(i-1)+r^(i-2)*h;
end
If you want to place n nodes in geometric progression between 0 and 1 with ratio r, then the nodes will be placed at
x(1) = 0
x(2) = h
x(3) = h + r*h
x(4) = h + r*h + r^2*h
...
x(n) = h*(1 + r + r^2 + ... + r^(n-2)) = 1
where we can determine h as
h = 1/sum(r^j, j = 0..(n-2)) = (r - 1)/(r^(n-1) - 1)
We can then place all n nodes:
h = (r - 1)/(r^(n-1) - 1); % 1st grid spacing
x = [0, h*cumsum(r.^(0:(n-2)))];
Solution for n = 5 and r = 0.9:
x =
0.00000 0.29078 0.55249 0.78802 1.00000
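As a quick sanity check (using the n = 20 and r = 0.9 values from the question), the last node lands exactly on 1 and successive spacings shrink by the factor r:
n = 20;  r = 0.9;                       % values from the question
h = (r - 1)/(r^(n-1) - 1);              % first grid spacing
x = [0, h*cumsum(r.^(0:(n-2)))];
disp(x(end))                            % 1.0000
disp(diff(x(1:3))/h)                    % 1.0000  0.9000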

How to iterate over functions?

I would like to loop over a function. I have the following "mother" code:
v = 1;
fun = @root;
x0 = [0,0]
options = optimset('MaxFunEvals',100000,'MaxIter', 10000 );
x = fsolve(fun,x0, options)
In addition, I have the following function in a separate file:
function D = root(x)
v = 1;
D(1) = x(1) + x(2) + v - 2;
D(2) = x(1) - x(2) + v - 1.8;
end
Now, I would like to find the roots when v = sort(rand(1,1000)). In other words, I would like to run the solve for each value of v.
You will need to modify root to accept an additional variable (v), and then change the function handle to an anonymous function which feeds in the v that you want:
function D = root(x, v)
D(1) = x(1) + x(2) + v - 2;
D(2) = x(1) - x(2) + v - 1.8;
end
% Creates a function handle to root using a specific value of v
fun = @(x) root(x, v(k));
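For example, a minimal sketch of the full loop (reusing x0 and the options from the question; sols is my own name for the collected roots):
v  = sort(rand(1, 1000));
x0 = [0, 0];
options = optimset('MaxFunEvals', 100000, 'MaxIter', 10000);
sols = zeros(numel(v), 2);               % one root per row
for k = 1:numel(v)
    fun = @(x) root(x, v(k));            % fix v(k) inside the handle
    sols(k, :) = fsolve(fun, x0, options);
end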
Just in case that equation is your actual equation (and not a dummy example): that system is linear, meaning you can solve it for all v with a single mldivide:
v = sort(rand(1,1000));
x = [1 1; 1 -1] \ bsxfun(@plus, -v, [2; 1.8])
And, in case those are not your actual equations, you don't need to loop; you can vectorize the whole thing:
function x = solver()
options = optimset('Display' , 'off',...
'MaxFunEvals', 1e5,...
'MaxIter' , 1e4);
v = sort(rand(1, 1000));
x0 = repmat([0 0], numel(v), 1);
x = fsolve(@(x)root(x,v'), x0, options);
end
function D = root(x,v)
D = [x(:,1) + x(:,2) + v - 2
x(:,1) - x(:,2) + v - 1.8];
end
This may or may not be faster than looping; it depends on your actual equations.
It may be slower, because fsolve will need to compute a single 2000×2000 Jacobian (4M elements) instead of a 2×2 Jacobian 1000 times (4k elements in total).
But it may also be faster, because the startup cost of fsolve can be large, meaning the overhead of many calls may in fact outweigh the cost of computing the larger Jacobian.
In any case, providing the Jacobian as a second output will speed everything up rather enormously:
function solver()
options = optimset('Display' , 'off',...
'MaxFunEvals', 1e5,...
'MaxIter' , 1e4,...
'Jacobian' , 'on');
v = sort(rand(1, 1000));
x0 = repmat([1 1], numel(v), 1);
x = fsolve(@(x)root(x,v'), x0, options);
end
function [D, J] = root(x,v)
% Jacobian is constant:
persistent J_out
if isempty(J_out)
one = speye(numel(v));
J_out = [+one +one
+one -one];
end
% Function values at x
D = [x(:,1) + x(:,2) + v - 2
x(:,1) - x(:,2) + v - 1.8];
% Jacobian at x:
J = J_out;
end
Alternatively, you can simply loop over the values of v:
vvec = sort(rand(1,2));
x0 = [0,0];
for v = vvec,
fun = @(x) root(v, x);
options = optimset('MaxFunEvals',100000,'MaxIter', 10000 );
x = fsolve(fun, x0, options);
end
with function definition:
function D = root(v, x)
D(1) = x(1) + x(2) + v - 2;
D(2) = x(1) - x(2) + v - 1.8;
end

The Fastest Method of Solving System of Non-linear Equations in MATLAB

Assume we have three equations:
eq1 = x1 + (x1 - x2) * t - X == 0;
eq2 = z1 + (z1 - z2) * t - Z == 0;
eq3 = ((X-x1)/a)^2 + ((Z-z1)/b)^2 - 1 == 0;
while the six known variables are:
a = 42 ;
b = 12 ;
x1 = 316190;
z1 = 234070;
x2 = 316190;
z2 = 234070;
So we are looking for the three unknown variables:
X, Z and t
I wrote two methods to solve it. But since I need to run this code for 5.7 million data points, it becomes really slow.
Method one (using "solve"):
tic
S = solve( eq1 , eq2 , eq3 , X , Z , t ,...
'ReturnConditions', true, 'Real', true);
toc
X = double(S.X(1))
Z = double(S.Z(1))
t = double(S.t(1))
results of method one:
X = 316190;
Z = 234060;
t = -2.9280;
Elapsed time is 0.770429 seconds.
Method two (using "fsolve"):
coeffs = [a,b,x1,x2,z1,z2]; % Known parameters
x0 = [ x2 ; z2 ; 1 ].'; % Initial values for iterations
f_d = @(x0) myfunc(x0,coeffs); % f_d considers x0 as variables
options = optimoptions('fsolve','Display','none');
tic
M = fsolve(f_d,x0,options);
toc
results of method two:
X = 316190; % X = M(1)
Z = 234060; % Z = M(2)
t = -2.9280; % t = M(3)
Elapsed time is 0.014 seconds.
Although the second method is faster, it still needs to be improved. Please let me know if you have a better solution. Thanks.
* extra information:
In case you are interested in what those 3 equations are: the first two are equations of a line in 2D, and the third is an ellipse equation. I need to find the intersection of the line with the ellipse. Obviously, we get two points as a result, but let's forget about the second answer for simplicity.
My suggestion is to use the second approach, which is the one recommended by MATLAB for nonlinear equation systems.
Declare an M-function:
function Y=mysistem(X)
%X(1) = X
%X(2) = t
%X(3) = Z
a = 42 ;
b = 12 ;
x1 = 316190;
z1 = 234070;
x2 = 316190;
z2 = 234070;
Y(1,1) = x1 + (x1 - x2) * X(2) - X(1);
Y(2,1) = z1 + (z1 - z2) * X(2) - X(3);
Y(3,1) = ((X(1)-x1)/a)^2 + ((X(3)-z1)/b)^2 - 1;
end
Then, to solve, use:
options = optimoptions('fsolve','Display','none');
x0 = [ x2 , 1 , z2 ];   % ordered as [X, t, Z] to match mysistem
M = fsolve(@mysistem,x0,options);
You may also want to reduce the default precision by changing StepTolerance (default 1e-6).
For a further speed-up, you can supply the Jacobian matrix of the system.
For more details, take a look at the official documentation:
fsolve Nonlinear Equations with Analytic Jacobian
Basically, by giving the solver the Jacobian matrix of the system (and the corresponding option) you can increase the method's efficiency, as sketched below.
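Here is a hedged sketch of that idea for this particular system, keeping the X(1) = X, X(2) = t, X(3) = Z ordering used above; the 'SpecifyObjectiveGradient' option assumes a recent MATLAB release (older releases use optimset(..., 'Jacobian', 'on')):
function [Y, J] = mysistemJac(X)
% Same residuals as mysistem above, plus the analytic Jacobian dY_i/dX_j.
% Usage:
%   options = optimoptions('fsolve', 'Display', 'none', ...
%                          'SpecifyObjectiveGradient', true);
%   M = fsolve(@mysistemJac, [316190, 1, 234070], options);
a = 42;   b = 12;
x1 = 316190;  z1 = 234070;
x2 = 316190;  z2 = 234070;
Y(1,1) = x1 + (x1 - x2)*X(2) - X(1);
Y(2,1) = z1 + (z1 - z2)*X(2) - X(3);
Y(3,1) = ((X(1) - x1)/a)^2 + ((X(3) - z1)/b)^2 - 1;
% Partial derivatives of each residual w.r.t. [X, t, Z]:
J = [ -1,                  x1 - x2,  0;
       0,                  z1 - z2, -1;
       2*(X(1) - x1)/a^2,  0,        2*(X(3) - z1)/b^2 ];
end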

Matlab to find proper coefficients [duplicate]

This question already has answers here:
MATLAB - Fit exponential curve WITHOUT toolbox
(2 answers)
Closed 7 years ago.
I have the x and y vectors described below:
x = [0 5 8 15 18 25 30 38 42 45 50];
y = [81.94 75.94 70.06 60.94 57.00 50.83 46.83 42.83 40.94 39.00 38.06];
With these values, how can I find the coefficients of y = a*(b^x)?
I've tried this code, but it fits y = a*e^(b*x):
clear, clc, close all, format compact, format long
% enter data
x = [0 5 8 15 18 25 30 38];
y = [81.94 75.94 70.06 60.94 57.00 50.83 46.83 42.83];
n = length(x);
y2 = log(y);
j = sum(x);
k = sum(y2);
l = sum(x.^2);
m = sum(y2.^2);
r2 = sum(x .* y2);
b = (n * r2 - k * j) / (n * l - j^2)
a = exp((k-b*j)/n)
y = a * exp(b * 35)
result_68 = log(68/a)/b
I know interpolation techniques, but I couldn't apply them to this problem...
Many thanks in advance!
Since y = a * b^x is the same as log(y) = log(a) + x*log(b), you can do
>> y = y(:);
>> x = x(:);
>> logy = log(y);
>> beta = regress(logy, [ones(size(x)), x]);
>> loga = beta(1);
>> logb = beta(2);
>> a = exp(loga);
>> b = exp(logb);
so the values of a and b are
>> a, b
a =
78.8627588780382
b =
0.984328823937827
Plotting the fit
>> plot(x, y, '.', x, a * b .^ x, '-')
shows the data points together with the fitted curve.
NB: the regress function is from the Statistics Toolbox, but you can define a very simple version which does what you need:
function beta = regress(y, x)
beta = (x' * x) \ (x' * y);
end
As an extension to the answer given by Chris Taylor, which provides the best linear fit in the log-transformed domain, you can find a better fit in the original domain by directly solving the non-linear problem with, for example, the Gauss-Newton algorithm.
Using for example the solution given by Chris as a starting point:
x = [0 5 8 15 18 25 30 38 42 45 50];
y = [81.94 75.94 70.06 60.94 57.00 50.83 46.83 42.83 40.94 39.00 38.06];
regress = @(y, x) (x' * x) \ (x' * y);
y = y(:);
x = x(:);
logy = log(y);
beta = regress(logy, [ones(size(x)), x]);
loga = beta(1);
logb = beta(2);
a = exp(loga)
b = exp(logb)
error = sum((a*b.^x - y).^2)
Which gives:
>> a, b, error
a =
78.862758878038164
b =
0.984328823937827
error =
42.275290442577422
You can iterate a bit further to find a better solution
beta = [a; b];
iter = 20
for k = 1:iter
fi = beta(1)*beta(2).^x;
ri = y - fi;
J = [ beta(2).^x, beta(1)*beta(2).^(x-1).*x ]';
JJ = J * J';
Jr = J * ri;
delta = JJ \ Jr;
beta = beta + delta;
end
a = beta(1)
b = beta(2)
error = sum((a*b.^x - y).^2)
Giving:
>> a, b, error
a =
80.332725222265623
b =
0.983480686478288
error =
35.978195088265906
One more option is to use MATLAB's Curve Fitting Toolbox which lets you generate the fit interactively, and can also emit the MATLAB code needed to run the fit.
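For example, a minimal sketch with fittype/fit (the start point is an assumption taken from the log-domain fit above):
x = [0 5 8 15 18 25 30 38 42 45 50];
y = [81.94 75.94 70.06 60.94 57.00 50.83 46.83 42.83 40.94 39.00 38.06];
ft = fittype('a*b^x', 'independent', 'x', 'coefficients', {'a','b'});
f  = fit(x(:), y(:), ft, 'StartPoint', [78.86, 0.984]);  % start from the log-domain fit
plot(f, x, y)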

Not sure what this 'histogram code' is doing in MATLAB

I have the following code that was given to me, but I am not at all sure what the logic here is. The idea, I believe, is that this will histogram/quantize my data. Here is the code:
The input:
x = 180.*rand(1,1000); %1000 points from 0 to 180 degrees.
binWidth = 20; %I want the binWidth to be 20 degrees.
The main function:
% -------------------------------------------------------------------------
% Compute the closest bin center x1 that is less than or equal to x
% -------------------------------------------------------------------------
function [x1, b1] = computeLowerHistBin(x, binWidth)
% Bin index
bin = floor(x./binWidth - 0.5);
% Bin center x1
x1 = binWidth * (bin + 0.5);
% add 2 to get to 1-based indexing
b1 = bin + 2;
end
Finally, the 'quantized' data:
w = 1 - (x - x1)./binWidth
Here is what I do not get: I do not understand - at all - just why exactly x1 is computed the way it is, and also why/how w is computed the way it is. In fact, of all the things, w confuses me the most. I literally cannot understand the logic here, or what is really intended. Would appreciate a detailed elucidation of this logic. Thanks.
He is binning with lb <= x < ub, splitting the interval [0,180] into [-10,10), [10,30), [30,50), ..., [150,170), [170,190).
Suppose x = 180, then:
bin = floor(180/20-0.5) = floor(9-0.5) = floor(8.5) = 8;
while if x = 0:
bin = floor(0/20 - 0.5) = floor(-0.5) = -1;
which respectively translate into x1 = 20*(8 + 0.5) = 170 and x1 = 20*(-1 + 0.5) = -10, which matches what the name computeLowerHistBin() suggests.
In the end, w simply measures how far the point x is from the corresponding lower bin center x1. Notice that w is in (0,1], with w equal to 1 when x = x1 and approaching 0 as x -> x1 + binWidth. So if x approaches 170 (with x1 = 150), then w approaches 1 - (170-150)/20 = 0.
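A quick way to see this is to print x1, b1, and w for a few hypothetical sample points; w can then be read as the weight of x toward the lower bin center x1:
binWidth = 20;
for x = [150 155 160 169.9]               % hypothetical sample angles
    [x1, b1] = computeLowerHistBin(x, binWidth);
    w = 1 - (x - x1)./binWidth;
    fprintf('x = %6.1f   x1 = %5.1f   b1 = %d   w = %.3f\n', x, x1, b1, w);
end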