Optimization of a matrix in MATLAB using fmincon

I have a 30x30 base matrix (OD_b1) and two base vectors (bg and Ag). My aim is to optimize a 30x30 matrix (X) such that:
1) the squared difference between the vector (bg) and the sum of all the columns of X (i.e., sum(X,2)) is minimized;
2) the squared difference between the vector (Ag) and the sum of all the rows of X (i.e., sum(X,1)) is minimized;
3) the squared difference between the elements of the matrix (X) and the matrix (OD_b1) is minimized.
The mathematical form of the objective is:
minimize over X:  ||bg - sum(X,2)||^2 + ||Ag - sum(X,1).'||^2 + ||X - OD_b1||_F^2
I have tried this:
fun = @(X) transpose(bg-sum(X,2))*(bg-sum(X,2)) + (Ag-sum(X,1))*transpose(Ag-sum(X,1)) + sumsqr(X_b-X);
[val,X]=fmincon(fun,OD_b1,AA,BB,Aeq,beq,LB,UB)
I don't get errors but it seems like it's stuck.
Is it because I have too many variables or is there another reason?
Thanks in advance

This is a simple, unconstrained least squares problem and hence has a simple solution that can be expressed as the solution to a linear system.
I will show you (1) the precise and efficient way to solve this and (2) how to solve with fmincon.
The precise, efficient solution:
Problem setup
Just so we're on the same page, I initialize the variables as follows:
n = 30;
Ag = randn(n, 1); % observe the dimensions
X_b = randn(n, n);
bg = randn(n, 1);
The code:
A1 = kron(ones(1,n), eye(n));
A2 = kron(eye(n), ones(1,n));
A = (A1'*A1 + A2'*A2 + eye(n^2));
b = A1'*bg + A2'*Ag + X_b(:);
x = A \ b; % solves A*x = b
Xstar = reshape(x, n, n);
Why it works:
I first reformulated your problem so that the optimization variable is a vector x, not a matrix X. Observe that z = bg - sum(X,2) is equivalent to:
x = X(:); % vectorize X
A1 = kron(ones(1,n), eye(n)); % a special matrix that sums the entries of x appropriately
z = A1*x;
Similarly, A2 is set up so that A2*x is equivalent to sum(X,1).' (the column sums of X as a column vector). Your problem is then equivalent to:
minimize (over x)  (bg - A1*x)'*(bg - A1*x) + (Ag - A2*x)'*(Ag - A2*x) + (y - x)'*(y - x),  where y = X_b(:). That is, y is the vectorized version of X_b.
This problem is convex, so the first order condition is a necessary and sufficient condition for the optimum. Take the derivative with respect to x and that equation will define your solution! Sample math for an almost equivalent (but slightly simpler) problem is below:
minimize (over x)  (b - A*x)'*(b - A*x) + (y - x)'*(y - x)
Rewriting the objective,
b'b - b'Ax - x'A'b + x'A'Ax + y'y - 2y'x + x'x
and dropping the constant terms b'b and y'y, this is equivalent to:
minimize (over x)  (-2b'A - 2y')*x + x'*(A'A + I)*x
The first order condition is:
(A'A + I + (A'A + I)')*x - 2A'b - 2y = 0
(A'A + I)*x = A'b + y
Your problem is essentially the same. It has the first order condition:
(A1'*A1 + A2'*A2 + I)*x = A1'*bg + A2'*Ag + y
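As a quick sanity check (a minimal sketch, reusing n, A1, A2, bg, Ag, X_b, and the solution x from the code above), the gradient of the vectorized objective should vanish at x:
g = 2*((A1'*A1 + A2'*A2 + eye(n^2))*x - (A1'*bg + A2'*Ag + X_b(:)));
norm(g) % should be near machine precision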
How to solve with fmincon
You can do the following:
f = @(X) transpose(bg-sum(X,2))*(bg-sum(X,2)) + (Ag'-sum(X,1))*transpose(Ag'-sum(X,1)) + sum(sum((X_b-X).^2));
o = optimoptions('fmincon', 'MaxFunEvals', 30000);
Xstar2 = fmincon(f,zeros(n,n),[],[],[],[],[],[],[],o);
You can then check the answers are about the same with:
normdif = norm(Xstar - Xstar2)
And you can see that the gap is small, but that the linear algebra based solution is somewhat more precise:
gap = f(Xstar2) - f(Xstar)
If the fmincon approach hangs, try it with a smaller n just to gain confidence that the linear algebra based solution is more precise and far faster. n = 30 means solving a 30^2 = 900 variable optimization problem, which is not easy for a general-purpose solver. With the linear algebra approach, you can go up to n = 100 (i.e., a 10,000 variable problem) or even larger.
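If you want to see the speed difference directly, here is a minimal timing sketch (results are machine-dependent; it just reuses the definitions above):
tic; x = (A1'*A1 + A2'*A2 + eye(n^2)) \ (A1'*bg + A2'*Ag + X_b(:)); toc % fast
tic; Xstar2 = fmincon(f, zeros(n,n), [], [], [], [], [], [], [], o); toc % much slower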

I would probably solve this as a QP using quadprog using the following reformulation (keeping the objective as simple as possible to make the problem "less nonlinear"):
min sum(i, v(i)^2) + sum(i, w(i)^2) + sum((i,j), z(i,j)^2)
subject to:
v = bg - sum(c, x)   (sum of the columns)
w = Ag - sum(r, x)   (sum of the rows)
z = xbase - x
The QP solver is more precise (no finite-difference gradients). This approach also allows you to add additional bounds and linear equality and inequality constraints.
The other suggestion to form the first order conditions explicitly is also a good one: it also has no issue with imprecise gradients (the first order conditions are linear). I usually prefer a quadratic model because of its flexibility.
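One way to realize this with quadprog is to eliminate v, w, and z and state the QP directly in x (a minimal sketch, reusing n, bg, Ag, and X_b from the question's setup; quadprog minimizes (1/2)*x'*H*x + f'*x):
A1 = kron(ones(1,n), eye(n)); % A1*x = sum of the columns of X
A2 = kron(eye(n), ones(1,n)); % A2*x = sum of the rows of X (as a column)
H = 2*(A1'*A1 + A2'*A2 + eye(n^2));
f = -2*(A1'*bg + A2'*Ag + X_b(:));
x = quadprog(H, f); % add bounds and linear constraints here if needed
Xqp = reshape(x, n, n);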


Using MATLAB plots to find linear equation constants

Finding m and c for an equation y = mx + c, with the help of math and plots.
y is data_model_1, x is time.
Avoid other MATLAB functions like fitlm as it defeats the purpose.
I am having trouble finding the constants m and c. I am trying to find both m and c by limiting them to a range (based on an educated guess) and deducing their values from the mean error: the (m, c) pair whose mean error is closest to 0 should give my m and c values.
load(file)
figure
plot(time, data_model_1, 'bo')
hold on
for a = 0.11:0.01:0.13
    for c = -13:0.1:-10
        data_a = a * time + c;
        plot(time, data_a, 'r');
    end
end
figure
hold on
for a = 0.11:0.01:0.13
    for c = -13:0.1:-10
        data_a = a * time + c;
        mean_range = mean(abs(data_a - data_model_1));
        plot(a, mean_range, 'b.')
    end
end
A quick & dirty approach
You can quickly get m and c using fminsearch(). In the first example below, the error function is the sum of squared errors (SSE). The second example uses the sum of absolute errors. The key here is ensuring the error function is convex.
Note that c = Beta(1) and m = Beta(2).
Reproducible example (MATLAB code[1]):
% Generate some example data
N = 50;
X = 2 + 13*random(makedist('Beta',.7,.8),N,1);
Y = 5 + 1.5.*X + randn(N,1);
% Example 1
SSEh = @(Beta) sum((Y - (Beta(1) + Beta(2).*X)).^2);
Beta0 = [0.5 0.5]; % Initial guess
[Beta, SSE] = fminsearch(SSEh, Beta0)
% Example 2
SAEh = @(Beta) sum(abs(Y - (Beta(1) + Beta(2).*X)));
[Beta, SumAbsErr] = fminsearch(SAEh, Beta0)
This is a quick & dirty approach that can work for many applications.
@Wolfie's comment directs you to the analytical approach of solving a system of linear equations with the \ operator, mldivide(). This is the more correct approach (though it will get a similar answer). One caveat: this approach gives the SSE (least-squares) answer.
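For reference, a minimal sketch of that analytical approach (reusing X and Y from the example above):
Beta_ls = [ones(N,1), X] \ Y; % Beta_ls(1) = c (intercept), Beta_ls(2) = m (slope)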
[1] Tested with MATLAB R2018a

How does one compute a single finite differences in Matlab efficiently?

I want to compute a finite difference of a function in MATLAB. In other words,
f(x + e_i) - f(x)
is what I want to compute. Note that it's very similar to the first-order numerical partial derivative (a forward difference in this case):
(f(x + e_i) - f(x)) / e_i
Currently I am using for loops to compute it but it seems that Matlab is much slower than I thought. I am doing it as follows:
function [ dU ] = numerical_gradient(W, f, eps)
%compute gradient or finite difference update numerically
[D1, D2] = size(W);
dU = zeros(D1, D2);
for d1 = 1:D1
    for d2 = 1:D2
        e = zeros([D1, D2]);
        e(d1, d2) = eps;
        f_e1 = f(W + e);
        f_e2 = f(W - e);
        %numerical_derivative = (f_e1 - f_e2)/(2*eps);
        %dU(d1,d2) = numerical_derivative;
        numerical_difference = f_e1 - f_e2;
        dU(d1, d2) = numerical_difference;
    end
end
It seems really difficult to vectorize the above code, because numerical differences follow the definition of the gradient and partial derivatives:
df_dW = [ ..., df_dWi, ... ]
where df_dWi assumes the other coordinates are fixed and only considers the change in the variable Wi. Thus, I can't just change all the coordinates at once.
Is there a better way to do this? My intuition tells me that the best approach is to implement it not in MATLAB but in some other language, say C, and then have MATLAB call that library. Is that true? Or does it mean that the best solution is some MATLAB library that already does this for me?
I did see:
https://www.mathworks.com/matlabcentral/answers/332414-what-is-the-quickest-way-to-find-a-gradient-or-finite-difference-in-matlab-of-a-real-function-in-hig
but unfortunately, it computes exact derivatives, which isn't what I am looking for. I am explicitly looking for differences or "bad approximation" to the gradient.
Since it seems this code is not easy to vectorize (in fact, my intuition tells me it's not possible), my only other idea is to implement this finite difference function in C and then have MATLAB call it. Is this a good idea? Does anyone know how to do this?
I did try reading the following:
https://www.mathworks.com/help/matlab/matlab_external/standalone-example.html
but it was too difficult for me to understand because I have no idea what a mex file is, whether I need an arrayProduct.c file as well as a mex.h file, whether I also need a MATLAB file, etc. If there were a way to simply download a working example with all the files they suggest there, plus some instructions to compile it, that would be super helpful. But just reading the html/article like that, it's impossible for me to infer what they want me to do.
For the sake of completeness, it seems reddit has some comments in its discussion of this:
https://www.reddit.com/r/matlab/comments/623m7i/how_does_one_compute_a_single_finite_differences/
Here is a more efficient way of doing so:
function [ vNumericalGrad ] = CalcNumericalGradient( hInputFunc, vInputPoint, epsVal )
numElmnts = size(vInputPoint, 1);
vNumericalGrad = zeros([numElmnts, 1]);
refVal = hInputFunc(vInputPoint);
for ii = 1:numElmnts
    % Set the perturbation vector
    refInVal = vInputPoint(ii);
    vInputPoint(ii) = refInVal + epsVal;
    % Compute numerical gradient
    vNumericalGrad(ii) = (hInputFunc(vInputPoint) - refVal) / epsVal;
    % Reset the perturbation vector
    vInputPoint(ii) = refInVal;
end
end
This code allocates less memory.
The above code's performance will be totally controlled by the speed of hInputFunction.
The small tricks compared to the original code are:
No reallocation of the perturbation vector e at each iteration.
Instead of the vector addition W + e, there are 2 scalar assignments to the array.
Halving the calls to hInputFunction() by computing the reference value outside the loop (this only works for forward / backward differences).
This will probably be very close to C code, unless you can implement the function which computes the value (hInputFunction) more efficiently in C.
A full implementation can be found in the StackOverflow Q44984132 repository (it was posted for StackOverflow Q44984132).
See CalcFunGrad(vX, hObjFun, difMode, epsVal).
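A minimal usage sketch (the test function here is hypothetical; the exact gradient of v.'*v is 2*v, so the forward-difference result should be close to that):
hF = @(v) v.' * v; % simple quadratic test function
v0 = [1; 2; 3];
g = CalcNumericalGradient(hF, v0, 1e-6) % approximately [2; 4; 6]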
A far better approach (numerically more stable, no issue of choosing the perturbation hyperparameter, accurate up to machine precision) is to use algorithmic/automatic differentiation. For this you need the MATLAB Deep Learning Toolbox. Then you can use dlgradient to compute the gradient. Below you find the source code corresponding to your example.
Most importantly, you can examine the error and observe that the deviation of the automatic approach from the analytical solution is indeed machine precision, while for the finite difference approach (I chose second-order central differences) the error is orders of magnitude higher. For 100 points and a range of [-10, 10] these errors are somewhat tolerable, but if you play a bit with Rand_Max and n_points you will observe that the errors become larger and larger.
Error of algorithmic / automatic diff. is: 1.4755528111219851e-14
Error of finite difference diff. is: 1.9999999999348703e-01 for perturbation 1.0000000000000001e-01
Error of finite difference diff. is: 1.9999999632850161e-03 for perturbation 1.0000000000000000e-02
Error of finite difference diff. is: 1.9999905867860374e-05 for perturbation 1.0000000000000000e-03
Error of finite difference diff. is: 1.9664569947425062e-07 for perturbation 1.0000000000000000e-04
Error of finite difference diff. is: 1.0537897883625319e-07 for perturbation 1.0000000000000001e-05
Error of finite difference diff. is: 1.5469326944467290e-06 for perturbation 9.9999999999999995e-07
Error of finite difference diff. is: 1.3322061696937969e-05 for perturbation 9.9999999999999995e-08
Error of finite difference diff. is: 1.7059535957436630e-04 for perturbation 1.0000000000000000e-08
Error of finite difference diff. is: 4.9702408787320664e-04 for perturbation 1.0000000000000001e-09
Source Code:
f2.m
function y = f2(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
y = x1.^2 + 2*x2.^2 + 2*x3.^3 + 2*x1.*x2 + 2*x2.*x3;
f2_grad_analytic.m:
function grad = f2_grad_analytic(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
grad(:, 1) = 2*x1 + 2*x2;
grad(:, 2) = 4*x2 + 2*x1 + 2 * x3;
grad(:, 3) = 6*x3.^2 + 2*x2;
f2_grad_AD.m:
function grad = f2_grad_AD(x)
x1 = x(:, 1);
x2 = x(:, 2);
x3 = x(:, 3);
y = x1.^2 + 2*x2.^2 + 2*x3.^3 + 2*x1.*x2 + 2*x2.*x3;
grad = dlgradient(y, x);
CalcNumericalGradient.m:
function NumericalGrad = CalcNumericalGradient(InputPoints, eps)
% (Central, second-order accurate FD)
NumericalGrad = zeros(size(InputPoints));
for i = 1:size(InputPoints, 2)
    perturb = zeros(size(InputPoints));
    perturb(:, i) = eps;
    NumericalGrad(:, i) = (f2(InputPoints + perturb) - f2(InputPoints - perturb)) / (2 * eps);
end
main.m:
clear;
close all;
clc;
n_points = 100;
Rand_Max = 20;
x_test_FD = rand(n_points, 3) * Rand_Max - Rand_Max/2;
% Calculate analytical solution
grad_analytic = f2_grad_analytic(x_test_FD);
grad_AD = zeros(n_points, 3);
for i = 1:n_points
    x_test_dl = dlarray(x_test_FD(i,:));
    grad_AD(i,:) = dlfeval(@f2_grad_AD, x_test_dl);
end
Err_AD = norm(grad_AD - grad_analytic);
fprintf("Error of algorithmic / automatic diff. is: %.16e\n", Err_AD);
eps_range = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7, 1e-8, 1e-9];
for i = 1:length(eps_range)
    eps = eps_range(i);
    grad_FD = CalcNumericalGradient(x_test_FD, eps);
    Err_FD = norm(grad_FD - grad_analytic);
    fprintf("Error of finite difference diff. is: %.16e for perturbation %.16e\n", Err_FD, eps);
end

How can I get all solutions to this equation in MATLAB?

I would like to solve the following equation: tan(x) = 1/x
What I did:
syms x
eq = tan(x) == 1/x;
sol = solve(eq,x)
But this gives me only one numerical approximation of the solution. After that I read about the following:
[sol, params, conds] = solve(eq, x, 'ReturnConditions', true)
But this tells me that it can't find an explicit solution.
How can I find numerical solutions to this equation within some given range?
I've never liked using solvers "blindly", that is, without some sort of decent initial-value selection scheme. In my experience, the values you find when doing things blindly will be without context as well: you'll often miss solutions, or think something is a solution when in reality the solver exploded, etc.
For this particular case, it is important to realize that fzero refines its estimates using secant-type steps, which rely on local slope estimates. But for f(x) = x · tan(x) - 1 these get increasingly difficult to compute accurately as x increases:
The larger x becomes, the better f(x) approximates a vertical line near its asymptotes; fzero will simply explode! Therefore it is imperative to get an estimate as close to the solution as possible before even entering fzero.
So, here's a way to get good initial values.
Consider the function
f(x) = x · tan(x) - 1
Knowing that tan(x) has Taylor expansion:
tan(x) ≈ x + (1/3)·x³ + (2/15)·x⁵ + (17/315)·x⁷ + ...
we can use that to approximate the function f(x). Truncating after the second term, we can write:
f(x) ≈ x · (x + (1/3)·x³) - 1
Now, the key realization is that tan(x) repeats with period π. Therefore, it is most useful to consider the family of functions:
fₙ(x) ≈ x · ( (x - n·π) + (1/3)·(x - n·π)³) - 1
Evaluating this for a couple of multiples and collecting terms gives the following generalization:
f₀(x) = x⁴/3 - 0π·x³ + ( 0π² + 1)x² - (0π + (0π³)/3)·x - 1
f₁(x) = x⁴/3 - 1π·x³ + ( 1π² + 1)x² - (1π + (1π³)/3)·x - 1
f₂(x) = x⁴/3 - 2π·x³ + ( 4π² + 1)x² - (2π + (8π³)/3)·x - 1
f₃(x) = x⁴/3 - 3π·x³ + ( 9π² + 1)x² - (3π + (27π³)/3)·x - 1
f₄(x) = x⁴/3 - 4π·x³ + (16π² + 1)x² - (4π + (64π³)/3)·x - 1
⋮
fₙ(x) = x⁴/3 - nπ·x³ + (n²π² + 1)x² - (nπ + (n³π³)/3)·x - 1
Implementing all this in a simple MATLAB test:
% Replace this with the whole number of pi's you want to
% use as offset
n = 5;
% The coefficients of the approximating polynomial for this offset
C = @(npi) [1/3
            -npi
            npi^2 + 1
            -npi - npi^3/3
            -1];
% Find the real, positive polynomial roots
R = roots(C(n*pi));
R = R(imag(R) == 0);
R = R(R > 0);
% And use these as initial values for fzero()
x_npi = fzero(@(x) x.*tan(x) - 1, R)
In a loop, this can produce the following table:
% Estimate (polynomial)   Solution (fzero)
   0.889543617524132       0.860333589019380     0·π
   3.425836967935954       3.425618459481728     1·π
   6.437309348195653       6.437298179171947     2·π
   9.529336042900365       9.529334405361963     3·π
  12.645287627956868      12.645287223856643     4·π
  15.771285009691695      15.771284874815882     5·π
  18.902410011613000      18.902409956860023     6·π
  22.036496753426441      22.036496727938566     7·π
  25.172446339768143      25.172446326646664     8·π
  28.309642861751708      28.309642854452012     9·π
  31.447714641852869      31.447714637546234    10·π
  34.586424217960058      34.586424215288922    11·π
As you can see, the approximant is practically equal to the solution.
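A minimal sketch of such a loop (a hedged reconstruction; it reuses C from above, and the variable names est/sol are mine):
est = zeros(12, 1);
sol = zeros(12, 1);
for k = 0:11
    R = roots(C(k*pi)); % polynomial roots for offset k*pi
    R = R(imag(R) == 0 & real(R) > 0); % a single positive real root is expected
    est(k+1) = R(1);
    sol(k+1) = fzero(@(x) x.*tan(x) - 1, R(1));
end
disp([est, sol])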
To find a numerical solution to a function within some range, you can use fzero like this:
fun = @(x) x*tan(x) - 1; % Multiplied by x so fzero has no issue evaluating it at x = 0.
range = [0 pi/2];
sol = fzero(fun,range);
The above would return just one solution (0.8603). If you want additional solutions, you will have to call fzero more times. This can be done, for example, in a loop:
fun = @(x) tan(x) - 1/x;
RANGE_START = 0;
RANGE_END = 3*pi;
RANGE_STEP = pi/2;
intervals = repelem(RANGE_START:RANGE_STEP:RANGE_END, 2);
intervals = reshape(intervals(2:end-1), 2, []).';
sol = NaN(size(intervals,1), 1);
for ind1 = 1:numel(sol)
    sol(ind1) = fzero(fun, mean(intervals(ind1,:)));
end
sol = sol(~isnan(sol)); % In case you specified more intervals than solutions.
Which gives:
[0.86033358901938;
1.57079632679490; % Wrong
3.42561845948173;
4.71238898038469; % Wrong
6.43729817917195;
7.85398163397449] % Wrong
Note that:
The function is symmetric, and so are its roots. This means you can solve on just the positive interval (for example) and get the negative roots "for free".
Every other entry in sol is wrong, because these are the asymptotic discontinuities (where tan transitions from +Inf to -Inf), which fzero mistakenly recognizes as roots. So you can just ignore them (i.e., keep sol = sol(1:2:end);).
Multiply the equation by x and cos(x) to avoid any denominators that can take the value 0:
f(x) = x*sin(x) - cos(x) == 0
Consider the normalized function
h(x) = (x*sin(x) - cos(x)) / (abs(x) + 1)
For large x this is increasingly close to sin(x) (or -sin(x) for large negative x). Indeed, plotting it, this is already visually true, up to an amplitude factor, for x > pi.
For the first root in [0, pi/2], use the second-degree Taylor approximation at x = 0, x^2 - (1 - 0.5*x^2) == 0, to get the root approximation x[0] = sqrt(2/3). For the higher roots, take the sine roots x[n] = n*pi, n = 1, 2, 3, ... as initial approximations in the Newton iteration xnext = x - f(x)/f'(x) to get (a code sketch follows the table):
n initial 1. Newton limit of Newton
0 0.816496580927726 0.863034004302817 0.860333589019380
1 3.141592653589793 3.336084918413964 3.425618459480901
2 6.283185307179586 6.403911810682199 6.437298179171945
3 9.424777960769379 9.512307014150883 9.529334405361963
4 12.566370614359172 12.635021895208379 12.645287223856643
5 15.707963267948966 15.764435036320542 15.771284874815882
6 18.849555921538759 18.897518573777646 18.902409956860023
7 21.991148575128552 22.032830614521892 22.036496727938566
8 25.132741228718345 25.169597069842926 25.172446326646664
9 28.274333882308138 28.307365162331923 28.309642854452012
10 31.415926535897931 31.445852385744583 31.447714637546234
11 34.557519189487721 34.584873343220551 34.586424215288922
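A minimal MATLAB sketch of that Newton iteration (a hedged reconstruction; by the product rule, f'(x) = 2*sin(x) + x*cos(x)):
f = @(x) x.*sin(x) - cos(x);
fp = @(x) 2*sin(x) + x.*cos(x); % derivative of f
x = [sqrt(2/3), (1:11)*pi]; % initial approximations
for k = 1:10 % a handful of Newton steps suffice
    x = x - f(x)./fp(x);
end
disp(x.') % matches the 'limit of Newton' column above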

Weighted sum of matrices optimization

I'm a beginner in optimization and welcome any guidance in this field.
I have 15 matrices (i.e., Di, each of size n*m) and want to find the best weights (wi) for a weighted average of them, to produce a matrix that is more similar to one given matrix (Dt).
In fact, my objective function is as follows:
min [norm2(sum(wi * Di) - Dt) + norm2(W)]
over i = 1 ... 15, s.t. sum(wi) = 1, wi >= 0
How can I optimize this function in Matlab?
You are describing a simple quadratic programming problem that can easily be solved using Matlab's quadprog.
Here is how it goes:
Your objective function is [norm2(sum(wi * Di) - Dt) + norm2(W)] subject to some linear constraints on w. Let's rewrite it using simplified notation. Let w be a 15-by-1 vector of unknowns. Let D be an n*m-by-15 matrix (each column is one of the Di matrices you have, written as a single column), and let Dt be an n*m-by-1 vector (same as your Dt but written as a column vector). Now some linear algebra (using the facts that ||x||^2 = x'*x and that minimizing ||x|| is equivalent to minimizing ||x||^2):
[norm2(sum(wi * Di) - Dt)^2 + norm2(W)^2]
  = (D*w - Dt)'*(D*w - Dt) + w'*w
  = w'*D'*D*w - 2*w'*D'*Dt + Dt'*Dt + w'*w
  = w'*(D'*D + I)*w - 2*w'*D'*Dt + Dt'*Dt
The last term Dt'*Dt is constant w.r.t. w and can therefore be discarded during minimization. Recalling that quadprog minimizes (1/2)*w'*H*w + f*w, this leaves you with
H = 2*(D'*D+eye(15));
f = -2*Dt'*D;
As for the constraint sum(w)=1, this can easily be defined by
Aeq = ones(1,15);
beq = 1;
And a lower bound lb = zeros(15,1) will ensure that all w_i>=0.
And the quadratic optimization:
w = quadprog( H, f, [], [], Aeq, beq, lb );
Should do the trick for you!
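Putting it all together, a minimal sketch (the container names Ds, for a cell array holding your 15 matrices, and Dt_mat, for your target matrix, are hypothetical):
k = 15;
D = zeros(numel(Dt_mat), k);
for i = 1:k
    D(:, i) = Ds{i}(:); % each matrix becomes one column
end
Dt = Dt_mat(:);
H = 2*(D'*D + eye(k));
f = -2*D'*Dt; % column-vector form of -2*Dt'*D
Aeq = ones(1, k); beq = 1; % sum(w) = 1
lb = zeros(k, 1); % w_i >= 0
w = quadprog(H, f, [], [], Aeq, beq, lb);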

Matlab: finding coefficients of ODE system

I have all the data and an ODE system of three equations with 9 unknown coefficients (a1, a2, ..., a9).
dS/dt = a1*S+a2*D+a3*F
dD/dt = a4*S+a5*D+a6*F
dF/dt = a7*S+a8*D+a9*F
t = [1 2 3 4 5]
S = [17710 18445 20298 22369 24221]
D = [1357.33 1431.92 1448.94 1388.33 1468.95]
F = [104188 104792 112097 123492 140051]
How can I find these coefficients (a1, ..., a9) of the ODE system using Matlab?
I can't spend too much time on this, but basically you need to use math to reduce the equation to something more meaningful:
your equation is of the order
dx/dt = A*x
ergo the solution is
x(t-t0) = exp(A*(t-t0)) * x(t0)
Thus
exp(A*(t-t0)) = x(t-t0) * Pseudo(x(t0))
Pseudo is the Moore-Penrose Pseudo-Inverse.
EDIT: Had a second look at my solution, and I didn't calculate the pseudo-inverse properly.
Basically, Pseudo(x(t0)) = x(t0)'*inv(x(t0)*x(t0)'), since x(t0) * Pseudo(x(t0)) equals the identity matrix.
Now what you need to do is treat each time step (1 to 2, 2 to 3, 3 to 4, 4 to 5) as an experiment (therefore t - t0 = 1), so the solution would be to:
1- Build your pseudo inverse:
xt = [S;D;F];
xt0 = xt(:,1:4);
xInv = xt0'*inv(xt0*xt0');
2- Get exponential result
xt1 = xt(:,2:5);
expA = xt1 * xInv;
3- Get the logarithm of the matrix:
A = logm(expA);
And since t-t0= 1, A is our solution.
And a simple proof to check
[t, y] = ode45(@(t,x) A*x, [1 5], xt(1:3,1));
plot(t, y, 1:5, xt, 'x')
You have a linear, coupled system of ordinary differential equations,
y' = Ay with y = [S(t); D(t); F(t)]
and you're trying to solve the inverse problem,
A = unknown
Interesting!
First line of attack
For given A, it is possible to solve such systems analytically (read the wiki for example).
The general solution for a 3x3 design matrix A takes the form
[S(t) D(t) F(t)].' = c1*V1*exp(r1*t) + c2*V2*exp(r2*t) + c3*V3*exp(r3*t)
with V and r the eigenvectors and eigenvalues of A, respectively, and c scalars that are usually determined by the problem's initial values.
Therefore, there would seem to be two steps to solve this problem:
Find vectors c*V and scalars r that best-fit your data
Reconstruct A from the eigenvalues and eigenvectors.
However, going down this road is treacherous. You'd have to solve the non-linear least-squares problem for the sum-of-exponentials equation you have (using lsqcurvefit, for example). That would give you the vectors c*V and scalars r. You'd then have to unravel the constants c somehow, and reconstruct the matrix A from V and r.
So, you'd have to solve for c (3 values), V (9 values), and r (3 values) to build the 3x3 matrix A (9 values) -- that seems too complicated to me.
Simpler method
There is a simpler way; use brute-force:
function test
    % find A by direct search
    [A, fval] = fminsearch(@objFcn, 10*randn(3))
end
function objVal = objFcn(A)
    % time span to be integrated over
    tspan = [1 2 3 4 5];
    % your desired data
    S = [17710 18445 20298 22369 24221];
    D = [1357.33 1431.92 1448.94 1388.33 1468.95];
    F = [104188 104792 112097 123492 140051];
    y_desired = [S; D; F].';
    % solve the ODE
    y0 = y_desired(1,:);
    [~, y_real] = ode45(@(~,y) A*y, tspan, y0);
    % objective function value: sum of squared quotients
    objVal = sum((1 - y_real(:)./y_desired(:)).^2);
end
So far so good.
However, I tried both the complicated way and the brute-force approach above, and I found it very difficult to get the squared error anywhere near satisfyingly small.
The best solution I could find, after numerous attempts:
A =
1.216731997197118e+000 2.298119167536851e-001 -2.050312097914556e-001
-1.357306715497143e-001 -1.395572220988427e-001 2.607184719979916e-002
5.837808840775175e+000 -2.885686207763313e+001 -6.048741083713445e-001
fval =
3.868360951628554e-004
Which isn't bad at all :) But I would've liked a solution that was less difficult to find...
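If you want to eyeball the fit, a quick sanity check in the spirit of the earlier ode45 proof (a minimal sketch, assuming A holds the matrix found above):
S = [17710 18445 20298 22369 24221];
D = [1357.33 1431.92 1448.94 1388.33 1468.95];
F = [104188 104792 112097 123492 140051];
[t, y] = ode45(@(t,y) A*y, [1 5], [S(1); D(1); F(1)]);
plot(t, y, 1:5, [S; D; F], 'x') % lines: fitted model; crosses: data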