Any example for multi-objective optimization in Pyomo?
I am trying to minimize four nonlinear objectives, and I would like to use Pyomo and Ipopt. I also have access to Gurobi.
I want to see even a very simple example where we optimize two or more objectives (one minimization and one maximization) over decision variables that are not just scalars but possibly vectors.
The Pyomo book that I have (https://link.springer.com/content/pdf/10.1007%2F978-3-319-58821-6.pdf) does not provide a single clue.
With Pyomo you have to implement it yourself. I am doing it right now. The best method is the augmented epsilon-constraint method: it will always be efficient and always find the global Pareto optimum. The best example is here:
Effective implementation of the epsilon-constraint method in Multi-Objective Mathematical Programming problems, Mavrotas, G, 2009.
Edit: here is the example from the paper above, programmed in Pyomo:
It will first maximize for f1, then for f2. It then applies the ordinary epsilon-constraint method and plots the inefficient Pareto front, and finally applies the augmented epsilon-constraint method, which is the method to go with!
from pyomo.environ import *
import matplotlib.pyplot as plt
# max f1 = X1
# max f2 = 3 X1 + 4 X2
# s.t. X1 <= 20
#      X2 <= 40
#      5 X1 + 4 X2 <= 200
model = ConcreteModel()
model.X1 = Var(within=NonNegativeReals)
model.X2 = Var(within=NonNegativeReals)
model.C1 = Constraint(expr = model.X1 <= 20)
model.C2 = Constraint(expr = model.X2 <= 40)
model.C3 = Constraint(expr = 5 * model.X1 + 4 * model.X2 <= 200)
model.f1 = Var()
model.f2 = Var()
model.C_f1 = Constraint(expr= model.f1 == model.X1)
model.C_f2 = Constraint(expr= model.f2 == 3 * model.X1 + 4 * model.X2)
model.O_f1 = Objective(expr= model.f1 , sense=maximize)
model.O_f2 = Objective(expr= model.f2 , sense=maximize)
model.O_f2.deactivate()
solver = SolverFactory('cplex')
solver.solve(model);
print( '( X1 , X2 ) = ( ' + str(value(model.X1)) + ' , ' + str(value(model.X2)) + ' )')
print( 'f1 = ' + str(value(model.f1)) )
print( 'f2 = ' + str(value(model.f2)) )
f2_min = value(model.f2)
# max f2
model.O_f2.activate()
model.O_f1.deactivate()
solver = SolverFactory('cplex')
solver.solve(model);
print( '( X1 , X2 ) = ( ' + str(value(model.X1)) + ' , ' + str(value(model.X2)) + ' )')
print( 'f1 = ' + str(value(model.f1)) )
print( 'f2 = ' + str(value(model.f2)) )
f2_max = value(model.f2)
# apply the normal epsilon-constraint
model.O_f1.activate()
model.O_f2.deactivate()
model.e = Param(initialize=0, mutable=True)
model.C_epsilon = Constraint(expr = model.f2 == model.e)
solver.solve(model);
print('Each iteration will constrain f2 to a value between f2_min and f2_max, i.e. in [' + str(f2_min) + ', ' + str(f2_max) + ']')
n = 4
step = int((f2_max - f2_min) / n)
steps = list(range(int(f2_min),int(f2_max),step)) + [f2_max]
x1_l = []
x2_l = []
for i in steps:
    model.e = i
    solver.solve(model)
    x1_l.append(value(model.X1))
    x2_l.append(value(model.X2))
plt.plot(x1_l,x2_l,'o-.');
plt.title('inefficient Pareto-front');
plt.grid(True);
# apply the augmented epsilon-constraint:
# max  f1 + delta * s
# s.t. f2 - s = e
model.del_component(model.O_f1)
model.del_component(model.O_f2)
model.del_component(model.C_epsilon)
model.delta = Param(initialize=0.00001)
model.s = Var(within=NonNegativeReals)
model.O_f1 = Objective(expr = model.f1 + model.delta * model.s, sense=maximize)
model.C_e = Constraint(expr = model.f2 - model.s == model.e)
x1_l = []
x2_l = []
for i in range(160, 190, 6):
    model.e = i
    solver.solve(model)
    x1_l.append(value(model.X1))
    x2_l.append(value(model.X2))
plt.plot(x1_l,x2_l,'o-.');
plt.title('efficient Pareto-front');
plt.grid(True);
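Note that only the SolverFactory line needs to change if you prefer Gurobi or Ipopt, which the question mentions. A minimal sketch, assuming the solver executables are installed and licensed on your machine:
# hypothetical solver swap; any LP/NLP solver Pyomo supports works here
solver = SolverFactory('gurobi')  # or SolverFactory('ipopt') for nonlinear models
results = solver.solve(model, tee=True)  # tee=True echoes the solver log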
Disclaimer: I am the main developer of pymoo, a multi-objective optimization framework in Python.
You might want to consider other frameworks in Python that have a focus on multi-objective optimization. For instance, in pymoo the definition of the rather simple test problem mentioned above is more or less straightforward. You can find an implementation of it below. The results in the design and objective space look as follows:
pymoo is well documented and provides a getting started guide that demonstrates defining your own optimization problem, obtaining a set of near-optimal solutions and analyzing it: https://pymoo.org/getting_started.html
The focus of the framework is anything related to multi-objective optimization including visualization and decision making.
import matplotlib.pyplot as plt
import numpy as np
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.model.problem import Problem
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter
class MyProblem(Problem):

    def __init__(self):
        """
        max f1 = X1
        max f2 = 3 X1 + 4 X2
        s.t. X1 <= 20
             X2 <= 40
             5 X1 + 4 X2 <= 200
        """
        super().__init__(n_var=2,
                         n_obj=2,
                         n_constr=1,
                         xl=np.array([0, 0]),
                         xu=np.array([20, 40]))

    def _evaluate(self, x, out, *args, **kwargs):
        # define both objectives
        f1 = x[:, 0]
        f2 = 3 * x[:, 0] + 4 * x[:, 1]
        # we have to negate the objectives because by default we assume minimization
        f1, f2 = -f1, -f2
        # define the constraint as a less-than-or-equal-to-zero constraint
        g1 = 5 * x[:, 0] + 4 * x[:, 1] - 200
        out["F"] = np.column_stack([f1, f2])
        out["G"] = g1
problem = MyProblem()
algorithm = NSGA2()
res = minimize(problem,
               algorithm,
               ('n_gen', 200),
               seed=1,
               verbose=True)
print(res.X)
print(res.F)
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12, 6))
Scatter(fig=fig, ax=ax1, title="Design Space").add(res.X, color="blue").do()
Scatter(fig=fig, ax=ax2, title="Objective Space").add(res.F, color="red").do()
plt.show()
To my knowledge, while Pyomo supports the expression of models with multiple objectives, it does not yet have automatic model transformations to generate common multi-objective optimization formulations for you.
That said, you can still create these formulations yourself. Take a look at epsilon-constraint, 1-norm, and infinity norm for some ideas.
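For instance, here is a minimal 1-norm (goal-programming-style) sketch in Pyomo. The objective expressions, variables, and target values are purely illustrative; the point is only how deviation variables linearize the norm:
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, NonNegativeReals, minimize

model = ConcreteModel()
model.x = Var([1, 2], within=NonNegativeReals)

# illustrative objective expressions and per-objective goals (hypothetical values)
f1 = model.x[1] + model.x[2]
f2 = 3 * model.x[1] - model.x[2]
target1, target2 = 10.0, 5.0

# deviation variables linearize |f_k - target_k|
model.d1 = Var(within=NonNegativeReals)
model.d2 = Var(within=NonNegativeReals)
model.c1a = Constraint(expr=f1 - target1 <= model.d1)
model.c1b = Constraint(expr=target1 - f1 <= model.d1)
model.c2a = Constraint(expr=f2 - target2 <= model.d2)
model.c2b = Constraint(expr=target2 - f2 <= model.d2)

# 1-norm: minimize the sum of deviations (for the infinity-norm variant,
# bound both deviations by a single variable and minimize that instead)
model.obj = Objective(expr=model.d1 + model.d2, sense=minimize)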
Related
I am trying to model the time evolution of a membrane based on the following code in MATLAB.
The basic outline is that the evolution is based on a differential equation which, in discretized form (matching the code below), reads

$\frac{dx_i^j}{dt} = \frac{k}{\Delta s^2}\left(x_{i+1}^j - 2x_i^j + x_{i-1}^j - x_0\left(\frac{x_{i+1}^j - x_i^j}{\ell_i} - \frac{x_i^j - x_{i-1}^j}{\ell_{i-1}}\right)\right)$

where j = 0, 1 with x^0 = x and x^1 = y, x^j(s_i) = x^j_i, and $\ell_i$ is the current length of the segment between nodes i and i+1; the code integrates this with explicit Euler steps of size 1/w.
My code is the following.
import numpy as np
from matplotlib import pyplot as plt
R0 = 5 #radius
N = 360 #number of intervals
x0 = 2*np.pi*R0/(N/2) #resting membrane lengths
phi = np.linspace(0,2*np.pi, num=360, dtype=float)
R1 = R0 + 0.5*np.sin(20*phi)
X = R1*np.cos(phi)
Y = R1*np.sin(phi)
L = np.linspace(-1,358, num=360, dtype=int)
R = np.linspace(1,360, num=360,dtype=int) #right and left indexing vectors
R[359] = 0
X = R1*np.cos(phi)
Y = R1*np.sin(phi)
plt.plot(X,Y)
plt.axis("equal")
plt.show()
ds = 1/N
ds2 = ds**2
k = 1/10
w = 10**6
for i in range(0, 20000):
    lengths = np.sqrt( (X[R]-X)**2 + (Y[R]-Y)**2 )
    Ex = k/ds2*(X[R] - 2*X + X[L] - x0*( (X[R]-X)/lengths - (X-X[L])/lengths[L]) )
    Ey = k/ds2*(Y[R] - 2*Y + Y[L] - x0*( (Y[R]-Y)/lengths - (Y-Y[L])/lengths[L]) )
    X = X + 1/w*Ex
    Y = Y + 1/w*Ey
plt.plot(X,Y)
plt.axis("equal")
plt.show()
The model is supposed to relax into a circular membrane, as in the expected plot below,
but this is what mine does:
Your definition of x0 is wrong.
In the Matlab code, it is equal to
x0 = 2*pi*R/N/2 # which is pi*R/N
while in your Python code it is
x0 = 2*np.pi*R0/(N/2) # which is 4*np.pi*R0/N
Correcting that, the end result is a circular shape, but with a different radius. I'm assuming that this is because of the reduced number of iterations (20000 instead of 1000000).
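In the Python code, the one-line fix is therefore:
x0 = 2*np.pi*R0/N/2  # i.e. np.pi*R0/N, matching the MATLAB definition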
Edit:
As expected, using the correct number of iterations results in a plot similar to your expected one.
I am new to coding. Basic information about my problem: r1 and r2 are two variables; u1 = dr1/dt, u2 = dr2/dt, and du1/dt = d^2r1/dt^2, du2/dt = d^2r2/dt^2. In the Matlab code, r(1) means r1, r(2) -> u1, r(3) -> r2, r(4) -> u2. rdot(2) is the expression for du1/dt and rdot(4) is the expression for du2/dt.
Ideally I should need just 4 initial conditions: r1(0), u1(0), r2(0), u2(0), which are 10d-6, 0, 5d-6, 0. But in my case du1/dt depends on du2/dt and vice versa (see the last term of T1_1 and T2_1), and an ideal IC for both du1/dt and du2/dt is 0. But how do I implement this in my code?
My code is here.
function rdot = f(t, r)
P_stat = 1.01325d5;
P_v = 2.3388d3;
mu = 1.002d-3;
sigma = 72.8d-3;
c_s = 1481d0;
poly_exp = 1.4d0;
rho = 998.2071d0;
f_s = 20d3;
P_s = 1.01325d5;
r1_eq = 10d-6;
r2_eq = 4d-6;
d = 1d-3;
rdot(1) = r(2);
P1_bw = ( (P_stat - P_v + (2.d0*sigma/r1_eq))*((r1_eq/r(1))^(3.d0*poly_exp)) ) - (2.d0*sigma/r(1)) - (4.d0*mu*r(2)/r(1));
P1_ext = P_s*sin(2.d0*pi*f_s*(t + (r(1)/c_s)));
T2_1 = ((2.d0*r(3)*(r(4)^2.d0)) + ((r(3)^2.d0)*rdot(4)))/d;
T2_4 = (1.d0 - (r(2)/c_s))*r(1);
T2_5 = 1.5d0*(1.d0 - (r(2)/(3.d0*c_s)))*(r(2)^2.d0);
T2_6 = (1.d0 + (r(2)/c_s))*(P1_bw - P_stat + P_v - P1_ext)/rho;
T2_8 = ( (-3.d0*poly_exp*r(2)*(P_stat - P_v + (2.d0*sigma/r1_eq))*((r1_eq/r(1))^(3.d0*poly_exp)) ) + (2.d0*sigma*r(2)/r(1)) - (4.d0*mu*(- ((r(2)^2.d0)/r(1)))) )/r(1);
T2_9 = 2.d0*pi*f_s*P_s*(cos(2.d0*pi*f_s*(t + (r(1)/c_s))))*(1.d0 + (r(2)/c_s) );
T2_7 = (r(1)/(rho*c_s))*(T2_8 - T2_9);
rdot(2) = (T2_6 + T2_7 - T2_1 - T2_5)/(T2_4 + (4.d0*mu/(rho*c_s)));
rdot(3) = r(4);
P2_bw = ( (P_stat - P_v + (2.d0*sigma/r1_eq))*((r1_eq/r(3))^(3.d0*poly_exp)) ) - (2.d0*sigma/r(3)) - (4.d0*mu*r(4)/r(3));
P2_ext = P_s*sin(2.d0*pi*f_s*(t + (r(3)/c_s)));
T1_1 = ((2.d0*r(1)*(r(2)^2.d0)) + ((r(1)^2.d0)*rdot(2)))/d;
T1_4 = (1.d0 - (r(4)/c_s))*r(3);
T1_5 = 1.5d0*(1.d0 - (r(4)/(3.d0*c_s)))*(r(4)^2.d0);
T1_6 = (1.d0 + (r(4)/c_s))*(P2_bw - P_stat + P_v - P2_ext)/rho;
T1_8 = ( (-3.d0*poly_exp*r(4)*(P_stat - P_v + (2.d0*sigma/r1_eq))*((r1_eq/r(3))^(3.d0*poly_exp)) ) + (2.d0*sigma*r(4)/r(3)) - (4.d0*mu*(- ((r(4)^2.d0)/r(3)))) )/r(3);
T1_9 = 2.d0*pi*f_s*P_s*(cos(2.d0*pi*f_s*(t + (r(3)/c_s))))*(1.d0 + (r(4)/c_s) );
T1_7 = (r(3)/(rho*c_s))*(T1_8 - T1_9);
rdot(4) = (T1_6 + T1_7 - T1_1 - T1_5)/(T1_4 + (4.d0*mu/(rho*c_s)));
rdot = rdot';
clc;
clear all;
close all;
time_range = [0 3000d-6];
initial_conditions = [10d-6 0.d0 5d-6 0.d0];
[t, r] = ode45('bubble', time_range, initial_conditions);
plot(t, r(:, 1), t, r(:, 3));
For each degree of an ODE you'll need one initial condition. This is due to the number of functions you are calculating: in your case you have a system of second-order ODEs, which after being solved will provide a total of four functions: r(1), r(2), r(3), r(4).
Why do we even need initial conditions? Imagine you have a simple derivative: y' = y.
We know that the function y = exp(x) * C solves this problem, but we need to adjust C in order to get "the one solution". On the other hand, it makes no sense to give y' its own initial condition, as it is fully defined once y is defined. It doesn't matter whether the "foreign" variable appears in the form of a derivative or as a linear factor; it does not change the number of ICs.
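Concretely, for y' = y with initial value y(0) = y_0:

$y(x) = C e^x, \qquad y(0) = C e^0 = C \implies C = y_0,$

so a single initial condition pins down the single free constant.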
I hope I could clarify it a bit. From my point of view your program should work that way, but I haven't had the chance to try it out.
I have fixed the problem by introducing two new variables with initial conditions, which are then updated as the code runs. The new code, with the two variables r2ddot and r1ddot, is here:
function rdot = f(t, r)
P_stat = 1.01325d5;
P_v = 2.3388d3;
mu = 1.002d-3;
sigma = 72.8d-3;
c_s = 1481d0;
poly_exp = 1.4d0;
rho = 998.2071d0;
f_s = 20d3;
P_s = 1.01325d5;
r1_eq = 4d-6;
r2_eq = 5d-6;
d = 50*(r1_eq + r2_eq);
r2ddot = 0;
r1ddot = 0;
rdot(1) = r(2);
P2_bw = ( (P_stat - P_v + (2.d0*sigma/r2_eq))*((r2_eq/r(3))^(3.d0*poly_exp)) ) - (2.d0*sigma/r(3)) - (4.d0*mu*r(4)/r(3));
P2_ext = P_s*sin(2.d0*pi*f_s*(t + (r(3)/c_s)));
T1_1 = ((2.d0*r(1)*(r(2)^2.d0)) + ((r(1)^2.d0)*r1ddot))/d;
T1_4 = (1.d0 - (r(4)/c_s))*r(3);
T1_5 = 1.5d0*(1.d0 - (r(4)/(3.d0*c_s)))*(r(4)^2.d0);
T1_6 = (1.d0 + (r(4)/c_s))*(P2_bw - P_stat + P_v - P2_ext)/rho;
T1_8 = ( (-3.d0*poly_exp*r(4)*(P_stat - P_v + (2.d0*sigma/r2_eq))*((r2_eq/r(3))^(3.d0*poly_exp)) ) + (2.d0*sigma*r(4)/r(3)) - (4.d0*mu*(- ((r(4)^2.d0)/r(3)))) )/r(3);
T1_9 = 2.d0*pi*f_s*P_s*(cos(2.d0*pi*f_s*(t + (r(3)/c_s))))*(1.d0 + (r(4)/c_s) );
T1_7 = (r(3)/(rho*c_s))*(T1_8 - T1_9);
rdot(2) = (T1_6 + T1_7 - T1_1 - T1_5)/(T1_4 + (4.d0*mu/(rho*c_s))) ;
r2ddot = rdot(2);
rdot(3) = r(4);
P1_bw = ( (P_stat - P_v + (2.d0*sigma/r1_eq))*((r1_eq/r(1))^(3.d0*poly_exp)) ) - (2.d0*sigma/r(1)) - (4.d0*mu*r(2)/r(1));
P1_ext = P_s*sin(2.d0*pi*f_s*(t + (r(1)/c_s)));
T2_1 = ((2.d0*r(3)*(r(4)^2.d0)) + ((r(3)^2.d0)*r2ddot))/d;
T2_4 = (1.d0 - (r(2)/c_s))*r(1);
T2_5 = 1.5d0*(1.d0 - (r(2)/(3.d0*c_s)))*(r(2)^2.d0);
T2_6 = (1.d0 + (r(2)/c_s))*(P1_bw - P_stat + P_v - P1_ext)/rho;
T2_8 = ( (-3.d0*poly_exp*r(2)*(P_stat - P_v + (2.d0*sigma/r1_eq))*((r1_eq/r(1))^(3.d0*poly_exp)) ) + (2.d0*sigma*r(2)/r(1)) - (4.d0*mu*(- ((r(2)^2.d0)/r(1)))) )/r(1);
T2_9 = 2.d0*pi*f_s*P_s*(cos(2.d0*pi*f_s*(t + (r(1)/c_s))))*(1.d0 + (r(2)/c_s) );
T2_7 = (r(1)/(rho*c_s))*(T2_8 - T2_9);
rdot(4) = (T2_6 + T2_7 - T2_1 - T2_5)/(T2_4 + (4.d0*mu/(rho*c_s)));
r1ddot = rdot(4);
rdot = rdot';
clc;
clear all;
close all;
time_range = [0 1d-3];
initial_conditions = [4d-6 0.d0 5d-6 0.d0];
[t, r] = ode45('bubble', time_range, initial_conditions);
plot(t, r(:, 1), t, r(:, 3));
But I am not able to get the desired result. I am actually trying to reproduce the results from Mettin's paper, see equation 7 there.
The second set of equations can be obtained by interchanging indices 1 and 2.
Important note: there is a typo in the last term of equation 7 in Mettin's paper, which can be verified by checking the dimensions of the various terms. The correct last term can be seen in another paper: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.83.066313
See the last term in eq. (1) there, and ignore the other extra terms in that equation. The important point is that the equation is of second order in Rj and has a second-order term in Ri at the end, and this is what I have tried to code.
Any help will be highly appreciated.
You have essentially the situation that
rdot(2) = a2 + b2*rdot(4)
rdot(4) = a4 + b4*rdot(2)
where a2,b2,a4,b4 contain all the other terms in your expressions.
This is a linear system that you have to solve to get the correct values to return. You can use one of Matlab's linear solvers or, in this simple 2-dimensional case, do it by hand:
rdot(2) = a2 + b2*(a4 + b4*rdot(2)) ==> rdot(2) = (a2 + b2*a4) / (1 - b2*b4)
rdot(4) = a4 + b4*(a2 + b2*rdot(4)) ==> rdot(4) = (a4 + b4*a2) / (1 - b2*b4)
To apply this you need to split
T1_1 as T1_1a + T1_1b*rdot(4),
where T1_1a and T1_1b can be computed from the given constants and state variables. Then, where you currently have
rdot(2) = (other + coeff*T1_1)/denom
you have to split it into
a2 = (other + coeff*T1_1a) / denom and
b2 = coeff*T1_1b / denom,
and do the same for the second equation to get a4 and b4; then apply the solution formulas above.
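A quick numeric sketch of that 2x2 solve (in Python/NumPy just to illustrate the algebra; the a2, b2, a4, b4 values below are placeholders, not values from your model):
import numpy as np

a2, b2, a4, b4 = 1.0, 0.2, -0.5, 0.1  # placeholder coefficients
# rdot2 = a2 + b2*rdot4 and rdot4 = a4 + b4*rdot2 rearrange to
# [[1, -b2], [-b4, 1]] @ [rdot2, rdot4] = [a2, a4]
M = np.array([[1.0, -b2], [-b4, 1.0]])
rdot2, rdot4 = np.linalg.solve(M, np.array([a2, a4]))
# matches the hand-derived closed forms above
assert np.isclose(rdot2, (a2 + b2*a4) / (1 - b2*b4))
assert np.isclose(rdot4, (a4 + b4*a2) / (1 - b2*b4))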
I am wondering what the fastest convex optimizer in Matlab is, or whether there is any way to speed up the current solvers. I'm using CVX, but it's taking forever to solve the optimization problem I have.
The optimization I have is to solve
minimize norm(Ax-b, 2)
subject to
x >= 0
and x d <= delta
where the size of A and b are very large.
Is there any way that I can solve this by a least square solver and then transfer it to the constraint version to make it faster?
I'm not sure what x.d <= delta means, but I'll just assume it's supposed to be x <= delta.
You can solve this problem using the projected gradient method or an accelerated projected gradient method (a slight modification of the projected gradient method that "magically" converges much faster). Here is some Python code that shows how to minimize 0.5*||Ax - b||^2 subject to the constraint that 0 <= x <= delta using FISTA, which is an accelerated projected gradient method. More details about the projected gradient method and FISTA can be found, for example, in Boyd's manuscript on proximal algorithms.
import numpy as np
import matplotlib.pyplot as plt
def fista(gradf, proxg, evalf, evalg, x0, params):
    # This code does FISTA with line search
    maxIter = params['maxIter']
    t = params['stepSize']  # initial step size
    showTrigger = params['showTrigger']
    increaseFactor = 1.25
    decreaseFactor = .5
    costs = np.zeros((maxIter, 1))
    xkm1 = np.copy(x0)
    vkm1 = np.copy(x0)
    for k in range(1, maxIter + 1):
        costs[k - 1] = evalf(xkm1) + evalg(xkm1)
        if k % showTrigger == 0:
            print("Iteration: " + str(k) + " cost: " + str(costs[k - 1]))
        t = increaseFactor * t
        acceptFlag = False
        while acceptFlag == False:
            if k == 1:
                theta = 1
            else:
                a = tkm1
                b = t * (thetakm1 ** 2)
                c = -t * (thetakm1 ** 2)
                theta = (-b + np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)
            y = (1 - theta) * xkm1 + theta * vkm1
            (gradf_y, fy) = gradf(y)
            x = proxg(y - t * gradf_y, t)
            fx = evalf(x)
            if fx <= fy + np.vdot(gradf_y, x - y) + (.5 / t) * np.sum((x - y) ** 2):
                acceptFlag = True
            else:
                t = decreaseFactor * t
        tkm1 = t
        thetakm1 = theta
        vkm1 = xkm1 + (1 / theta) * (x - xkm1)
        xkm1 = x
    return (xkm1, costs)
if __name__ == '__main__':
    delta = 5.0
    numRows = 300
    numCols = 50
    A = np.random.randn(numRows, numCols)
    ATrans = np.transpose(A)
    xTrue = delta * np.random.rand(numCols, 1)
    b = np.dot(A, xTrue)
    noise = .1 * np.random.randn(numRows, 1)
    b = b + noise

    def evalf(x):
        AxMinusb = np.dot(A, x) - b
        val = .5 * np.sum(AxMinusb ** 2)
        return val

    def gradf(x):
        AxMinusb = np.dot(A, x) - b
        grad = np.dot(ATrans, AxMinusb)
        val = .5 * np.sum(AxMinusb ** 2)
        return (grad, val)

    def evalg(x):
        return 0.0

    def proxg(x, t):
        return np.maximum(np.minimum(x, delta), 0.0)

    x0 = np.zeros((numCols, 1))
    params = {'maxIter': 500, 'stepSize': 1.0, 'showTrigger': 5}
    (x, costs) = fista(gradf, proxg, evalf, evalg, x0, params)

    plt.figure()
    plt.plot(x)
    plt.plot(xTrue)
    plt.figure()
    plt.semilogy(costs)
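As a cross-check (not part of the FISTA code above), SciPy ships a bounded least-squares solver that handles the same box constraint directly, assuming SciPy is available; a sketch with stand-in data:
import numpy as np
from scipy.optimize import lsq_linear

A = np.random.randn(300, 50)   # stand-ins for your (large) A and b
b = np.random.randn(300)
delta = 5.0
# solves the box-constrained least-squares problem with 0 <= x <= delta
res = lsq_linear(A, b, bounds=(0.0, delta))
print(res.x)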
Given that I have a model that can be expressed as:
y = a + b*st + c*d2
where st is a smoothed version of some data, and a, b and c are unknown model coefficients. An iterative process should be used to find the best values for a, b, and c, as well as an additional parameter alpha, shown below.
Here, I show an example using some data that I have. I'll only show a small fraction of the data here to get an idea of what I have:
17.1003710350253 16.7250000000000 681.521316544969
17.0325989276234 18.0540000000000 676.656460644882
17.0113862864815 16.2460000000000 671.738125420192
16.8744356336601 15.1580000000000 666.767363772145
16.5537077980594 12.8830000000000 661.739644621949
16.0646524243248 10.4710000000000 656.656219934146
15.5904357723302 9.35000000000000 651.523986525985
15.2894427136087 12.4580000000000 646.344231349275
15.1181450512182 9.68700000000000 641.118300709434
15.0074128442766 10.4080000000000 635.847600747838
14.9330905954828 11.5330000000000 630.533597865332
14.8201069920058 10.6830000000000 625.177819082427
16.3126863409751 15.9610000000000 619.781852331734
16.2700386755872 16.3580000000000 614.347346678083
15.8072873786912 10.8300000000000 608.876012461843
15.3788908036751 7.55000000000000 603.369621360944
15.0694302370038 13.1960000000000 597.830006367160
14.6313314652840 8.36200000000000 592.259061672302
14.2479738025295 9.03000000000000 586.658742460043
13.8147156115234 5.29100000000000 581.031064599264
13.5384821473624 7.22100000000000 575.378104234926
13.3603543306796 8.22900000000000 569.701997272687
13.2469020140965 9.07300000000000 564.004938753678
13.2064193251406 12.0920000000000 558.289182116093
13.1513460035983 12.2040000000000 552.557038340513
12.8747853506079 4.46200000000000 546.810874976187
12.5948999131388 4.61200000000000 541.053115045791
12.3969691298003 6.83300000000000 535.286235826545
12.1145822760120 2.43800000000000 529.512767505944
11.9541188991626 2.46700000000000 523.735291710730
11.7457790927936 4.15000000000000 517.956439908176
11.5202981254529 4.47000000000000 512.178891679167
11.2824263926694 2.62100000000000 506.405372863054
11.0981930749608 2.50000000000000 500.638653574697
10.8686514170776 1.66300000000000 494.881546094641
10.7122053911554 1.68800000000000 489.136902633882
10.6255883267131 2.48800000000000 483.407612975178
10.4979083986908 4.65800000000000 477.696601993434
10.3598092538338 4.81700000000000 472.006827058220
10.1929490084608 2.46700000000000 466.341275322034
10.1367069580204 2.36700000000000 460.702960898512
10.0194072271384 4.87800000000000 455.094921935306
9.88627023967911 3.53700000000000 449.520217586971
9.69091601129389 0.417000000000000 443.981924893704
9.48684595125235 -0.567000000000000 438.483135572389
9.30742664359900 0.892000000000000 433.026952726910
9.18283037670750 1.50000000000000 427.616487485241
9.02385722622626 1.75800000000000 422.254855571341
8.90355705229410 2.46700000000000 416.945173820367
8.76138912769045 1.99200000000000 411.690556646207
8.61299614111510 0.463000000000000 406.494112470755
8.56293606861698 6.55000000000000 401.358940124780
8.47831879772002 4.65000000000000 396.288125230599
8.42736865902327 6.45000000000000 391.284736577104
8.26325535934842 -1.37900000000000 386.351822497948
8.14547793724500 1.37900000000000 381.492407263967
8.00075641792910 -1.03700000000000 376.709487501030
7.83932517791044 -1.66700000000000 372.006028644665
7.68389447250257 -4.12900000000000 367.384961442799
7.63402151555169 -2.57900000000000 362.849178517935
The results that follow probably won't be meaningful, as the full data would be needed (this is just an example). Using this data I have tried to solve iteratively by
y = d(:,1);
d1 = d(:,2);
d2 = d(:,3);
alpha_o = linspace(0.01,1,10);
a = linspace(0.01,1,10);
b = linspace(0.01,1,10);
c = linspace(0.01,1,10);
defining different values for a, b, and c, as well as another term alpha, which is used in the model. I am now going to generate every possible combination of these parameters and see which combination provides the best fit to the data:
% every possible combination of values
xx = combvec(alpha_o,a,b,c);
% loop through each possible combination of values
for j = 1:size(xx,2);
alpha_o = xx(1,j);
a_o = xx(2,j);
b_o = xx(3,j);
c_o = xx(4,j);
st = d1(1);
for i = 2:length(d1);
st(i) = alpha_o.*d1(i) + (1-alpha_o).*st(i-1);
end
st = st(:);
y_pred = a_o + (b_o*st) + (c_o*d2);
mae(j) = nanmean(abs(y - y_pred));
end
I can then re-run the model using these optimum values:
[id1,id2] = min(mae);
alpha_opt = xx(:,id2);
st = d1(1);
for i = 2:length(d1);
st(i) = alpha_opt(1).*d1(i) + (1-alpha_opt(1)).*st(i-1);
end
st = st(:);
y_pred = alpha_opt(2) + (alpha_opt(3)*st) + (alpha_opt(4)*d2);
mae_final = nanmean(abs(y - y_pred));
However, to reach a final answer I would need to increase the number of initial guesses to more than 10 for each variable, which will take a long time to run. Therefore, I am wondering if there is a better method for what I am trying to do here? Any advice is appreciated.
Here are some thoughts: if you can decrease the amount of computation within each for loop iteration, you can possibly speed it up. One possible approach is to look for factors common to all iterations and move them outside the for loop:
If you look at the iteration, you'll see
st(1) = d1(1)
st(2) = a*d1(2) + (1-a)*st(1) = a*d1(2) + (1-a)*d1(1)
st(3) = a*d1(3) + (1-a)*st(2) = a*d1(3) + a*(1-a)*d1(2) + (1-a)^2*d1(1)
st(n) = a*d1(n) + a*(1-a)*d1(n-1) + a*(1-a)^2*d1(n-2) + ... + (1-a)^(n-1)*d1(1)
This means st can be calculated by multiplying these two matrices elementwise (here I use n = 4 to illustrate the concept) and summing along the first dimension:
temp1 = [ 0      0        0          a          ;
          0      0        a          a*(1-a)    ;
          0      a        a*(1-a)    a*(1-a)^2  ;
          1      (1-a)    (1-a)^2    (1-a)^3    ];
temp2 = [ 0      0        0          d1(4) ;
          0      0        d1(3)      d1(3) ;
          0      d1(2)    d1(2)      d1(2) ;
          d1(1)  d1(1)    d1(1)      d1(1) ];
st = sum(temp1.*temp2, 1)
Here's code that uses this concept: computation has been moved out of the inner for loop, and only assignment is left.
alpha_o = linspace(0.01,1,10);
xx = nchoosek(alpha_o, 4);
n = size(d1,1);
matrix_d1 = zeros(n, n);
d2 = d2'; % To make the dimension of d2 and st the same.
for ii = 1:n
matrix_d1(n-ii+1:n, ii) = d1(1:ii);
end
st = zeros(size(d1)'); % Pre-allocation of matrix will improve speed.
mae = zeros(1,size(xx,1));
matrix_alpha = zeros(n, n);
for j = 1 : size(xx,1)
alpha_o = xx(j,1);
temp = (power(1-alpha_o, [0:n-1])*alpha_o)';
matrix_alpha(n,:) = power(1-alpha_o, [0:n-1]);
for ii = 2:n
matrix_alpha(n-ii+1:n-1, ii) = temp(1:ii-1);
end
st = sum(matrix_d1.*matrix_alpha, 1);
y_pred = xx(j,2) + xx(j,3)*st + xx(j,4)*d2;
mae(j) = nanmean(abs(y - y_pred));
end
Then:
[~, idx] = min(mae);
alpha_opt = xx(idx,:);
st = zeros(size(d1)');
temp = (power(1-alpha_opt(1), [0:n-1])*alpha_opt(1))';
matrix_alpha = zeros(n, n);
matrix_alpha(n,:) = power(1-alpha_opt(1), [0:n-1]);
for ii = 2:n
matrix_alpha(n-ii+1:n-1, ii) = temp(1:ii-1);
end
st = sum(matrix_d1.*matrix_alpha, 1);
y_pred = alpha_opt(2) + (alpha_opt(3)*st) + (alpha_opt(4)*d2);
mae_final = nanmean(abs(y - y_pred));
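For what it's worth, the recursion st(i) = alpha*d1(i) + (1-alpha)*st(i-1) is just a first-order (exponential smoothing) filter, so it can also be computed in a single call; here is a Python/SciPy sketch of that equivalence (MATLAB's filter with an initial state plays the same role):
import numpy as np
from scipy.signal import lfilter

alpha = 0.3                # example smoothing parameter
d1 = np.random.rand(100)   # example data column
# st[i] = alpha*d1[i] + (1-alpha)*st[i-1], with st[0] = d1[0];
# pick the initial filter state so the first output equals d1[0]
zi = np.array([(1 - alpha) * d1[0]])
st, _ = lfilter([alpha], [1.0, -(1 - alpha)], d1, zi=zi)
assert np.isclose(st[0], d1[0])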
Let me know if this helps!
Given a system of differential equations such as:
dy/dt = f(t)
dx/dt = g(t)
A solution can be found using dsolve, such as:
dsolve(diff(y) == f(t), diff(x) == g(t), y(0) == 1, x(0) == 1);
But what about a system where all the variables depend on each other:
dy/dt = f(y,z)
dx/dt = g(x,y)
dz/dt = h(z,x)
When approached in the same way, with initial conditions, for a system which does have a solution, I cannot find one.
I know the system I have tried can produce solutions, as I have used a stochastic/deterministic simulator, so I think there's probably some particular syntax I need to use.
I'm specifically looking for the solution where the derivatives are all zero, if that helps.
EDIT:
Here is an example:
dPX/dt = (k_tl*(a0_tr + ((a_tr*KM^n)/((KM^n) + (PZ^n))))/kd_mRNA)-kd_prot*PX;
dPY/dt = (k_tl*(a0_tr + ((a_tr*KM^n)/((KM^n) + (PX^n))))/kd_mRNA)-kd_prot*PY;
dPZ/dt = (k_tl*(a0_tr + ((a_tr*KM^n)/((KM^n) + (PY^n))))/kd_mRNA)-kd_prot*PZ;
with the coefficients:
eff = 20;
KM = 40;
tau_mRNA=2.0;
tau_prot=10;
ps_a=0.5;
ps_0=5.0E-4;
t_ave = tau_mRNA/log(2);
k_tl=eff/t_ave;
a_tr=(ps_a-ps_0)*60;
a0_tr=ps_0*60;
kd_mRNA = log(2)/tau_mRNA;
kd_prot = log(2)/tau_prot;
beta = tau_mRNA/tau_prot;
alpha = a_tr*eff*tau_prot/(log(2)*KM);
alpha0 = a0_tr*eff*tau_prot/(log(2)*KM);
n=2;
And the initial conditions:
PX0 = 20;
PY0 = 0;
PZ0 = 0;
This produces a response:
This clearly has a steady state solution (all derivatives 0).
In MATLAB I have tried:
%%
syms PX(t) PY(t) PZ(t);
z = dsolve(diff(PX) == (k_tl*(a0_tr + ((a_tr*KM^n)/((KM^n) + (PZ^n))))/kd_mRNA)-kd_prot*PX, diff(PY) == (k_tl*(a0_tr + ((a_tr*KM^n)/((KM^n) + (PX^n))))/kd_mRNA)-kd_prot*PY, diff(PZ)==(k_tl*(a0_tr + ((a_tr*KM^n)/((KM^n) + (PY^n))))/kd_mRNA)-kd_prot*PZ,PX(0)==20)
and:
%%
eq1 = (k_tl*(a0_tr + ((a_tr*KM^n)/((KM^n) + (PZ^n))))/kd_mRNA)-kd_prot*PX;
eq2 = (k_tl*(a0_tr + ((a_tr*KM^n)/((KM^n) + (PX^n))))/kd_mRNA)-kd_prot*PY;
eq3 = (k_tl*(a0_tr + ((a_tr*KM^n)/((KM^n) + (PY^n))))/kd_mRNA)-kd_prot*PZ;
dsolve(diff(PX)==eq1,PX(0)==20,diff(PY)==eq2,PY(0)==0,diff(PZ)==eq3,PZ(0)==0)
Both produce no errors but return an empty sym.
Your numeric solution appears to have an oscillatory component. The "steady state" may be a zero-amplitude limit cycle, which is a non-trivial solution. You definitely shouldn't expect a system like this to have an easy-to-find analytic solution. The cyclic relations between your three variables don't help either. For what it's worth, Mathematica 10's DSolve is also unable to find a solution.
Though it won't get you to a solution, the way you're using symbolic math is less than optimal. When you use something like log(2) in a symbolic math equation, 2 should be converted to a symbolic value first. For example, sym(log(2)) yields the approximation 6243314768165359/9007199254740992, whereas log(sym(2)) returns the exact log(2). This latter form is much more likely to lead to solutions if they exist. Here's a modified version of your code, which unfortunately still returns "Warning: Explicit solution could not be found":
eff = 20;
KM = 40;
tau_mRNA=2;
tau_prot=10;
ps_a=1/sym(2);
ps_0=5/sym(10000);
ln2 = log(sym(2));
t_ave = tau_mRNA/ln2;
k_tl=eff/t_ave;
a_tr=(ps_a-ps_0)*60;
a0_tr=ps_0*60;
kd_mRNA = ln2/tau_mRNA;
kd_prot = ln2/tau_prot;
beta = tau_mRNA/tau_prot;
alpha = a_tr*eff*tau_prot/(ln2*KM);
alpha0 = a0_tr*eff*tau_prot/(ln2*KM);
n=2;
PX0 = 20;
PY0 = 0;
PZ0 = 0;
syms PX(t) PY(t) PZ(t);
eq1 = (k_tl*(a0_tr + a_tr*KM^n/(KM^n + PZ^n))/kd_mRNA)-kd_prot*PX;
eq2 = (k_tl*(a0_tr + a_tr*KM^n/(KM^n + PX^n))/kd_mRNA)-kd_prot*PY;
eq3 = (k_tl*(a0_tr + a_tr*KM^n/(KM^n + PY^n))/kd_mRNA)-kd_prot*PZ;
s = dsolve(diff(PX,t)==eq1,diff(PY,t)==eq2,diff(PZ,t)==eq3,PX(0)==20,PY(0)==0,PZ(0)==0)
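If all you actually need is the steady state (all derivatives zero), you don't need dsolve at all: you can root-find on the right-hand sides directly. A sketch of that idea in Python/SciPy (constants copied from the question; the initial guess is arbitrary, and MATLAB's fsolve or vpasolve would play the same role):
import numpy as np
from scipy.optimize import fsolve

# constants from the question
eff, KM, tau_mRNA, tau_prot = 20.0, 40.0, 2.0, 10.0
ps_a, ps_0, n = 0.5, 5.0e-4, 2
k_tl = eff / (tau_mRNA / np.log(2))
a_tr = (ps_a - ps_0) * 60
a0_tr = ps_0 * 60
kd_mRNA = np.log(2) / tau_mRNA
kd_prot = np.log(2) / tau_prot

def rhs(P):
    PX, PY, PZ = P
    # right-hand sides of dPX/dt, dPY/dt, dPZ/dt from the question
    dPX = k_tl * (a0_tr + a_tr * KM**n / (KM**n + PZ**n)) / kd_mRNA - kd_prot * PX
    dPY = k_tl * (a0_tr + a_tr * KM**n / (KM**n + PX**n)) / kd_mRNA - kd_prot * PY
    dPZ = k_tl * (a0_tr + a_tr * KM**n / (KM**n + PY**n)) / kd_mRNA - kd_prot * PZ
    return [dPX, dPY, dPZ]

steady = fsolve(rhs, [20.0, 20.0, 20.0])  # initial guess is arbitrary
print(steady)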