Constraints on a dependent variable in MATLAB

I am working on a simulation-optimization routine in MATLAB. My program solves a set of Differential Algebraic Equations (DAEs) that depend on a set of design variables x and computes a cost (objective) function. The value of the objective function is passed to fmincon, which decides how to update x so that the constraints are fulfilled.
I was wondering if there is a way to impose bounds not only on x, but also on internal variables. For instance, I have a physical limitation on the area of a piece of equipment, but this value does not belong to x (it is a dependent variable). I know that one can add penalty functions to the objective function to account for these variables, but I would like to know whether there is a way to make these internal variables "visible" to the optimizer, so that it runs subject to bounds or inequality conditions on them.
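For what it's worth, the usual fmincon mechanism for this kind of requirement is a nonlinear constraint function (the nonlcon argument), which can recompute the dependent quantity inside the simulation and return it as an inequality. A minimal sketch, assuming a hypothetical simulateDAE that solves the DAEs for a design x and reports the equipment area, and a limit Amax:

function [c, ceq] = areaConstraint(x, Amax)
    out = simulateDAE(x);    % hypothetical: run the DAE simulation for design x
    c   = out.area - Amax;   % fmincon enforces c(x) <= 0, i.e. area <= Amax
    ceq = [];                % no nonlinear equality constraints
end

% passed to the solver as the nonlcon argument (costFun, x0, lb, ub as in your setup):
% xopt = fmincon(@costFun, x0, [], [], [], [], lb, ub, @(x) areaConstraint(x, Amax));

Note that with this pattern the simulation is typically evaluated twice per iterate (once for the objective, once for the constraint) unless its results are cached.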

Related

How to set up an optimization in MATLAB which gives a new set of variables for each iteration of my problem?

In MATLAB, as far as I know, I should pass the handle of a cost function to the optimization function in order to optimize my problem. In my situation, I do not want to create a cost function and pass its handle to the optimizer. I would like to set up an optimization and ask the optimization object for the best new set of variables at each iteration; I calculate the cost myself and pass its value back to the optimization object. The algorithm would be as follows:
1- set up the optimization object (optimization method, optimization sense, ...).
2- introduce the variables and their bounds and constraints to the optimization object.
3- ask the optimization object for a set of variables.
4- apply the variables to the physical black box and obtain the outputs.
5- calculate the cost function from the monitored outputs.
6- if the cost function does not satisfy my goal, inform the optimization object of the calculated value and go to step 3.
7- end
As far as I have checked, the functions in the MATLAB Optimization Toolbox all require the handle of a cost function.
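For illustration, a sketch of how the loop in steps 3-6 can still be driven by a solver that expects a handle: the handle simply wraps the call to the physical black box and the cost calculation (blackBox and measureCost are hypothetical names, and the bounds below are placeholders):

costHandle = @(x) measureCost(blackBox(x));   % steps 4-5 folded into one handle
x0 = [1; 1];                                  % placeholder start point
lb = [0; 0];  ub = [10; 10];                  % placeholder bounds (step 2)
opts  = optimoptions('fmincon', 'Display', 'iter');
xBest = fmincon(costHandle, x0, [], [], [], [], lb, ub, [], opts);

The solver then performs steps 3-6 internally: it repeatedly proposes x, the handle applies it to the black box and returns the cost, and the solver decides when to stop. Derivative-free solvers such as patternsearch or ga accept the same kind of handle if the black-box output is noisy or non-smooth.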

ILP variable mapping in converting Matlab problem-based formulation to solver-based formulation

I am spinning up on ILPs, using Matlab tools as the "vehicle" for this. They offer both a conventional "solver-based" (SB) formulation and a "problem-based" (PB) formulation at a higher level of abstraction.
The difference between SB and PB is that, for PB, the user doesn't have to worry about what problem variables map to elements of the ILP column vector of variables. An optimization problem object accepts the optimization function and equality/inequality constraints in symbolic form, and the class methods handle the bookkeeping of defining a column vector of problem variables, the coefficients of the optimization function and the matrices & associated RHSs for equality & inequality constraints.
One can actually examine the SB counterpart to the PB formulation by using prob2struct to convert from PB to SB. Unfortunately, it isn't clear how prob2struct decides which PB variables map to which elements of the column vector of variables.
I tried searching through the optimization problem object in the PB formulation to see if it contains its own internal SB formulation details (regardless of whether they match those of prob2struct), or at least the variable mapping, but I couldn't find such details.
For prob2struct, is there a reliable rule for knowing which symbolic PB variables map to which elements of the SB column vector of variables?
Try the varindex function. It was introduced in R2019a.
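A minimal sketch of what that looks like on a hypothetical two-variable problem; varindex(prob) returns a structure whose fields give the positions of each problem-based variable in the solver-based column vector produced by prob2struct:

prob = optimproblem('ObjectiveSense', 'minimize');
x = optimvar('x', 3, 'Type', 'integer', 'LowerBound', 0);
y = optimvar('y', 'Type', 'integer', 'LowerBound', 0, 'UpperBound', 5);
prob.Objective = sum(x) + 2*y;
prob.Constraints.c1 = x(1) + x(2) + y >= 4;

idx = varindex(prob);      % struct with one field per optimization variable
% idx.x and idx.y hold the indices of x and y in the solver-based vector,
% which is the same ordering used by prob2struct(prob).
sprob = prob2struct(prob);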

Solving ODEs in NetLogo: Euler vs. Runge-Kutta vs. R solver

In my model each agent solves a system of ODEs at each tick. I have employed Euler's method (similar to the system dynamics modeler in NetLogo) to solve these first-order ODEs. However, for a stable solution, I am forced to use a very small time step (dt), which means the simulation proceeds very slowly with this method. I'm curious whether anyone has advice on a method to solve the ODEs more quickly. I am considering implementing Runge-Kutta (with a larger time step?) as was done here (http://academic.evergreen.edu/m/mcavityd/netlogo/Bouncing_Ball.html). I would also consider using the R extension and an ODE solver in R. But again, the ODEs are solved by each agent, so I don't know whether this is an efficient approach.
I'm hoping someone has a feel for the performance of these methods and can offer some advice. If not, I will try to share what I find out.
In general your idea is correct. For a method of order p to reach a global error level tol over an integration interval of length T, you will need a step size of magnitude roughly
h = (tol/T)^(1/p).
However, not only the discretization error accumulates over the N = T/h steps, but also the floating point error. This gives a lower bound for useful step sizes of magnitude
h = (T*mu)^(1/(p+1)).
Example: For T=1, mu=1e-15 and tol=1e-6,
the Euler method of order 1 would need a step size of about h=1e-6 and thus N=1e+6 steps and function evaluations. The range of step sizes where reasonable results can be expected is bounded below by about h=3e-8.
the improved Euler or Heun method has order 2, which implies a step size of about h=1e-3, N=1000 steps and 2N=2000 function evaluations; the lower bound for useful step sizes is about h=1e-5.
the classical Runge-Kutta method has order 4, which gives a required step size of about h=3e-2, about N=30 steps and 4N=120 function evaluations. The lower bound is about h=1e-3.
So there is a significant gain to be had by using higher order methods. At the same time, the range of step sizes over which a further reduction actually lowers the global error becomes significantly narrower with increasing order, even as the achievable accuracy increases. So one has to be aware of the point at which reducing the step size stops paying off, and leave well enough alone.
The implementation of RK4 in the ball example, as is usual for the numerical integration of ODEs, is written for an ODE system x' = f(t,x), where x is the, possibly very large, state vector.
A second-order ODE (system) is transformed into a first-order system by making the velocities members of the state vector: x'' = a(x, x') becomes [x', v'] = [v, a(x, v)]. The big vector of the agent system is then composed of the collection of the pairs [x, v] or, if desired, of the concatenation of all the x components followed by all the v components.
In an agent based system it is reasonable to store the components of the state vector belonging to the agent as internal variables of the agent. Then the vector operations are performed by iterating over the agent collection and computing the operation tailored to the internal variables.
Taking into consideration that in the LOGO language there are no explicit parameters for function calls, the evaluation of dotx = f(t,x) needs to first fix the correct values of t and x before calling the function evaluation of f:
save t0=t, x0=x
evaluate k1 = f_of_t_x
set t=t0+h/2, x=x0+h/2*k1
evaluate k2=f_of_t_x
set x=x0+h/2*k2
evaluate k3=f_of_t_x
set t=t0+h, x=x0+h*k3
evaluate k4=f_of_t_x
set x=x0+h/6*(k1+2*(k2+k3)+k4)
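The same step, written as a sketch in MATLAB-style syntax (not NetLogo) for a state vector x and right-hand side f(t, x):

function [t, x] = rk4Step(f, t0, x0, h)
    k1 = f(t0,       x0);
    k2 = f(t0 + h/2, x0 + h/2*k1);
    k3 = f(t0 + h/2, x0 + h/2*k2);
    k4 = f(t0 + h,   x0 + h*k3);
    x  = x0 + h/6*(k1 + 2*(k2 + k3) + k4);   % same combination as above
    t  = t0 + h;
end

% e.g. one step of x' = -x starting from x(0) = 1:  [t, x] = rk4Step(@(t,x) -x, 0, 1, 0.1)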

Matlab optimization - Variable bounds have "donut hole"

I'm trying to solve a problem using Matlab's genetic algorithm and fmincon functions where the variables' values do not have single upper and lower bounds. Instead, each variable should be allowed either to take the value x=0 or to lie within lb<=x<=ub. This is a turbine allocation problem, where a turbine can either be turned off (x=0) or operate within the lower and upper cavitation limits (lb and ub). Of course I can trick the problem by creating a constraint that is violated for values between 0 and lb, but I'm finding that the problem has a hard time converging this way. Is there an easier way to do this that would trim down the search space?
If the number of variables is small enough (say, 10 or 15 or less), then you can try every subset of variables that are allowed to be non-zero and see which subset gives you the optimal value. If you can't make assumptions about the structure of your optimization problem (e.g. you have penalties for non-zero variables but your main objective function is "exotic"), this is essentially the best you can do.
If you are willing to settle for an approximate solution, you can add a so-called "L1" penalty to your objective function: a constant times the sum of the absolute values of the variables. This encourages some variables to be exactly zero, and if your main objective function is convex then the resulting objective function is still convex, because the absolute value is convex and a sum of convex functions is convex. Convex functions are much easier to minimize, because any local minimum is a global minimum, which many optimization routines (including the ones implemented in Matlab) can reach.
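A rough sketch of the brute-force subset idea, with a toy objective and example limits (the real objFun, lb and ub would come from the turbine model; all names here are illustrative):

n = 4;
lb = 0.2*ones(n,1);  ub = ones(n,1);      % example cavitation limits
objFun = @(x) sum((x - 0.7).^2);          % toy objective, for illustration only
bestVal = Inf;  bestX = [];
for mask = 0:2^n-1
    on = bitget(mask, 1:n)' == 1;         % which turbines are running in this subset
    lbS = zeros(n,1);  ubS = zeros(n,1);  % turbines that are off stay fixed at x = 0
    lbS(on) = lb(on);  ubS(on) = ub(on);  % running turbines keep lb <= x <= ub
    x0 = (lbS + ubS)/2;
    [x, val] = fmincon(objFun, x0, [], [], [], [], lbS, ubS);
    if val < bestVal, bestVal = val; bestX = x; end
end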

MATLAB: Global Optimization where one parameters has to be an odd integer

I want to find the global minimum of a function f that takes two parameters, a and b. While a is continuous, b has to be an odd integer. How can I approach this problem, given MATLAB's built-in functions and those that come with the Optimization and Global Optimization Toolboxes?
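One possibility, sketched below under the assumption that the objective can be evaluated for any continuous a and odd b: reparameterize the odd integer as b = 2k + 1 with k an integer, and let ga's integer constraints handle k while a stays continuous (f and the bounds here are toy stand-ins):

f = @(a, b) (a - 1.5).^2 + (b - 7).^2;   % toy objective f(a, b), for illustration
fitness = @(z) f(z(1), 2*z(2) + 1);      % z = [a, k], so b = 2*k + 1 is always odd
nvars  = 2;
lb = [-10, 0];  ub = [10, 20];           % placeholder bounds on [a, k]
intcon = 2;                              % the 2nd variable (k) must be an integer
[zopt, fval] = ga(fitness, nvars, [], [], [], [], lb, ub, [], intcon);
a = zopt(1);  b = 2*zopt(2) + 1;         % recover the continuous a and the odd b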