I am trying to minimize a function handle with respect to a vector of parameters beta0. My function uses the built-in mvncdf function, which requires a positive definite covariance matrix. This matrix is computed from part of the parameter vector. There is also a constraint that the absolute value of some parameters be less than one.
I pass the constraints to fmincon in two ways: upper and lower bounds set to the required values, and the following nonlinear constraint:
function [c,ceq] = pos_def(beta0)
rho_12 = beta0(end-2,1);
rho_13 = beta0(end-1,1);
rho_23 = beta0(end,1);
sigma111 = [1 rho_12 rho_13; rho_12 1 rho_23; rho_13 rho_23 1];
sigma110 = [1 rho_12 -rho_13; rho_12 1 -rho_23; -rho_13 -rho_23 1];
sigma101 = [1 -rho_12 rho_13; -rho_12 1 -rho_23; rho_13 -rho_23 1];
sigma100 = [1 -rho_12 -rho_13; -rho_12 1 rho_23; -rho_13 rho_23 1];
eig111 = eig(sigma111);
eig110 = eig(sigma110);
eig101 = eig(sigma101);
eig100 = eig(sigma100);
% c <= 0 forces every eigenvalue to be nonnegative
c = vertcat(-eig111,-eig110,-eig101,-eig100);
ceq = [];   % no nonlinear equality constraints
Since all the matrices are square and symmetric by construction, I use the signs of the eigenvalues as a proxy for positive definiteness.
The optimization problem looks like:
opts = optimset('Display','iter','TolX',1e-15,'TolFun',1e-15,...
    'Algorithm','interior-point','MaxIter',100000,'MaxFunEvals',1000000);
xc3_3 = fmincon(model, beta, [], [], [], [], lb, ub, @pos_def, opts)
But during estimation fmincon aborts with the following error:
Error using mvncdf (line 193) SIGMA must be a square, symmetric, positive definite matrix.
In debug mode I can see that after two evaluation iterations MATLAB tries a beta0 that does not satisfy my nonlinear constraints:
beta0 =
-46.9208
33.2916
-2.1797
-46.4251
3.8337
-0.3066
6.1213
-20.9480
-1.7760
-0.1807
1.3950
4.5348
-0.9838
0.2600
-6.9887
-24.6157
-0.0112
-0.9923
-0.9284
0.7664
0.3062
And the constraint c < 0 is not satisfied:
c =
0.3646
-1.2998
-2.0648
0.3646
-1.2998
-2.0648
0.3646
-1.2998
-2.0648
0.3646
-1.2998
-2.0648
I do not understand why the optimizer tries to find a solution in the prohibited region, or how to avoid this problem. Alternatively, how can I express the positive definiteness constraint in a linear way?
The optimizer is just evaluating points to see if they are feasible directions to move in or not. Within your model you should tell it that a particular direction is not a good one. The pseudo-code would look something like
GetEigvalues
if (positive definite) then
Do what you really want to happen
else
Return a large number
end
or alternatively
try
Do what you really want to happen
catch
Return a large number
end
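A minimal MATLAB sketch of the first pattern, reusing the pos_def function from the question (it assumes model is the objective handle passed to fmincon above; the names penalized and penalize_if_indefinite are mine):
penalized = @(beta0) penalize_if_indefinite(beta0, model);
function f = penalize_if_indefinite(beta0, model)
% Reuse the constraint function to test positive definiteness first
c = pos_def(beta0);            % negated eigenvalues of the four sigmas
if any(c >= 0)                 % some eigenvalue is non-positive
    f = 1e10;                  % large penalty steers the optimizer away
else
    f = model(beta0);          % safe: mvncdf receives valid covariance matrices
end
end
You would then call fmincon with penalized in place of model.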
How do I solve the following system of equations in MATLAB when one of the elements of the variable vector is a constant? Please do give the code if possible.
More generally, if the solution is to use symbolic math, how do I go about generating a large number of variables, say 12 (rather than just two), even before solving them?
For example, create a number of symbolic variables using syms, and then make the system of equations like below.
syms a1 a2
A = [matrix]
x = [1;a1;a2];
y = [1;0;0];
eqs = A*x == y
sol = solve(eqs,[a1, a2])
sol.a1
sol.a2
In case you have a system with many variables, you could define all the symbols using syms, and solve it like above.
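For instance, a runnable version with a concrete matrix (the matrix below is arbitrary, chosen only so that a solution exists):
syms a1 a2
A = [1 2 -1; 1 1 -1; 2 0 -1]; % arbitrary example matrix
x = [1; a1; a2];
y = [1; 0; 0];
eqs = A*x == y;
sol = solve(eqs, [a1, a2]);
sol.a1 % returns 1
sol.a2 % returns 2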
You could also perform a parameter optimization with fminsearch. First you have to define a cost function, in a separate function file, in this example called cost_fcn.m.
function J = cost_fcn(p)
% make sure p is a column vector
p = reshape(p, [length(p) 1]);
% system of equations, can be linear or nonlinear
A = magic(12); % your system, I took some arbitrary matrix
sol = A*p;
% the goal for the system of equations to reach; can be zero or some other vector
goal = zeros(12,1);
% calculate the error (renamed to avoid shadowing the built-in "error" function)
err = goal - sol;
% use a cost criterion, e.g. the sum of squares
J = sum(err.^2);
end
This cost function will contain your system of equations and the goal solution. This can be any kind of system. The vector p contains the parameters being estimated, which are optimized starting from some initial guess. To do the optimization, you have to create a script:
% initial guess, can be zeros, or some other starting point
p0 = zeros(12,1);
% do the parameter optimization
p = fminsearch(@cost_fcn, p0);
In this case p0 is the initial guess, which you provide to fminsearch. The values of this initial guess are then iteratively adjusted until a minimum of the cost function is found. When the parameter optimization is finished, p contains the parameters that result in the lowest error for your system of equations. It is, however, possible that this is a local minimum, if there is no exact solution to the problem.
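As a quick sanity check after the run (my addition, not part of the original recipe), evaluate the cost at the returned parameters:
J_final = cost_fcn(p) % close to zero if an (almost) exact solution was found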
Your system is over-constrained, meaning you have more equations than unknowns, so you can't solve it exactly. What you can do is find a least-squares solution, using mldivide. First re-arrange your equations so that all the constant terms are on the right side of the equal sign, then use mldivide:
>> A = [0.0297 -1.7796; 2.2749 0.0297; 0.0297 2.2749]
A =
0.029700 -1.779600
2.274900 0.029700
0.029700 2.274900
>> b = [1-2.2749; -0.0297; 1.7796]
b =
-1.274900
-0.029700
1.779600
>> A\b
ans =
-0.022191
0.757299
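As a quick check (not part of the original answer), the residual norm confirms that this is a least-squares solution rather than an exact one:
>> x = A\b;
>> norm(A*x - b) % nonzero, so no exact solution exists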
I have an integrated error expression E = int[ abs(x-p)^2 ]dx, integrated from x = 0 to x = L. The variable p is a polynomial of the form 2*(a*sin(x)+b(a)*sin(2*x)+c(a)*sin(3*x)). In other words, both coefficients b and c are known expressions of a. An additional equation is given as dE/da = 0. If the upper limit L is defined, the system of equations is closed and I can solve for a, giving the three coefficients.
I managed to get an optimization routine to solve for a purely based on maximizing L. This is confirmed by setting optimize=0 in the code below. It gives the same solution as if I solved the problem analytically. Therefore, I know the equations to solve for the coefficient a are correct.
I know the example I presented can be solved with pencil and paper, but I'm trying to build an optimization function that is generalized for this type of problem (I have a lot to evaluate). Ideally, the polynomial is given as an input argument to a function, which then outputs xsol. Obviously, I need to get the optimization to work for the polynomial I presented here before I can worry about generalizations.
Anyway, I now need to further optimize the problem with some constraints. To start, L is chosen. This allows me to calculate a. Once a is known, the polynomial is a known function of x only, i.e. p(x). I then need to determine the largest INTERVAL from 0 to x over which the following constraint is satisfied: |dp(x)/dx - 1| < tol. This gives me a measure of the performance of the polynomial with the coefficient a. The interval is what I call the "bandwidth". I would like to emphasize two things: 1) The "bandwidth" is NOT the same as L. 2) All values of x within the "bandwidth" must meet the constraint. The function dp(x)/dx does oscillate in and out of the tolerance criterion, so testing the criterion for a single value of x does not work. It must be tested over an interval. The first instance of violation defines the bandwidth. I need to maximize this "bandwidth"/interval. For output, I also need to know which L led to such an optimization, so I know the correct a to choose for the given constraints. That is the formal problem statement. (I hope I got it right this time.)
Now my problem is setting this whole thing up with MATLAB's optimization tools. I tried to follow ideas from the following articles:
Tutorial for the Optimization Toolbox™
Setting optimize=1 for the if statement will work with the constrained optimization. I thought somehow nested optimization was involved, but I couldn't get anything to work. I provided known solutions to the problem from the IMSL optimization library to compare/check against. They are written below the optimization routine. Anyway, here is the code I've put together so far:
function [history] = testing()
% History
history.fval = [];
history.x = [];
history.a = [];
%----------------
% Equations
polynomial = @(x,a) 2*sin(x)*a + 2*sin(2*x)*(9/20 -(4*a)/5) + 2*sin(3*x)*(a/5 - 2/15);
dpdx = @(x,a) 2*cos(x)*a + 4*cos(2*x)*(9/20 -(4*a)/5) + 6*cos(3*x)*(a/5 - 2/15);
% Upper limit of integration
IC = 0.8; % initial
LB = 0; % lower
UB = pi/2; % upper
% Optimization
tol = 0.003;
% Coefficient
% --------------------------------------------------------------------------------------------
dpda = @(x,a) 2*sin(x) + 2*sin(2*x)*(-4/5) + 2*sin(3*x)*1/5;
dEda = @(L,a) -2*integral(@(x) (x-polynomial(x,a)).*dpda(x,a),0,L);
a_of_L = @(L) fzero(@(a)dEda(L,a),0); % Calculate the value of "a" for a given "L"
EXITFLAG = @(L) get_outputs(@()a_of_L(L),3); % Be sure a zero is actually calculated
% NL Constraints
% --------------------------------------------------------------------------------------------
% Equality constraint (No inequality constraints for parent optimization)
ceq = @(L) EXITFLAG(L) - 1; % Just make sure fzero finds a unique solution
confun = @(L) deal([],ceq(L));
% Objective function
% --------------------------------------------------------------------------------------------
% (Set optimize=0 to test the coefficient equations and proper maximization of L)
optimize = 1;
if optimize
%%%% Plug in solution below
else
% Optimization options
options = optimset('Algorithm','interior-point','Display','iter','MaxIter',500,'OutputFcn',@outfun);
% Optimize objective
objective = @(L) -L;
xsol = fmincon(objective,IC,[],[],[],[],LB,UB,confun,options);
% Known optimized solution from IMSL library
% a = 0.799266;
% lim = pi/2;
disp(['IMSL coeff (a): 0.799266 Upper bound (L): ',num2str(pi/2)])
disp(['code coeff (a): ',num2str(history.a(end)),' Upper bound: ',num2str(xsol)])
end
% http://stackoverflow.com/questions/7921133/anonymous-functions-calling-functions-with-multiple-output-forms
function varargout = get_outputs(fn, ixsOutputs)
output_cell = cell(1,max(ixsOutputs));
[output_cell{:}] = (fn());
varargout = output_cell(ixsOutputs);
end
function stop = outfun(x,optimValues,state)
stop = false;
switch state
case 'init'
case 'iter'
% Concatenate current point and objective function
% value with history. x must be a row vector.
history.fval = [history.fval; optimValues.fval];
history.x = [history.x; x(1)];
history.a = [history.a; a_of_L(x(1))];
case 'done'
otherwise
end
end
end
I could really use some help setting up the constrained optimization. I'm not only new to optimizations, I've never used MATLAB to do so. I should also note that what I have above does not work and is incorrect for the constrained optimization.
UPDATE: I added a for loop in the if optimize section to show what I'm trying to achieve with the optimization. Obviously, I could just use this, but it seems very inefficient, especially if I increase the resolution of range and have to run this optimization many times. If you uncomment the plots, it will show how the bandwidth behaves. By looping over the full range, I'm basically testing every L, but surely there has to be a more efficient way to do this?
UPDATE: Solved
So it seems fmincon is not the only tool for this job. In fact I couldn't even get it to work. Below, fmincon gets "stuck" on the IC and refuses to do anything...why...that's for a different post! Using the same layout and formulation, fminbnd finds the correct solution. The only difference, as far as I know, is that the former was using a conditional. But my conditional is nothing fancy, and really unneeded. So it's got to have something to do with the algorithm. I guess that's what you get when using a "black box". Anyway, after a long, drawn out, painful, learning experience, here is a solution:
options = optimset('Display','iter','MaxIter',500,'OutputFcn',@outfun);
% Conditional
index = @(L) min(find(abs([dpdx(range(range<=L),a_of_L(L)),inf] - 1) - tol > 0,1,'first'),length(range));
% Optimize
%xsol = fmincon(@(L) -range(index(L)),IC,[],[],[],[],LB,UB,confun,options);
xsol = fminbnd(@(L) -range(index(L)),LB,UB,options);
I would especially like to thank @AndrasDeak for all their support. I wouldn't have figured it out without the assistance!
Summary: This question deals with the improvement of an algorithm for the computation of linear regression.
I have a 3D array (dlMAT) representing monochrome photographs of the same scene taken at different exposure times (the vector IT). Mathematically, every vector along the 3rd dimension of dlMAT represents a separate linear regression problem that needs to be solved. The equation whose coefficients need to be estimated is of the form:
DL = R*IT^P, where DL and IT are obtained experimentally and R and P must be estimated.
The above equation can be transformed into a simple linear model after applying a logarithm:
log(DL) = log(R) + P*log(IT) => y = a + b*x
Presented below is the most "naive" way to solve this system of equations, which essentially involves iterating over all "3rd dimension vectors" and fitting a polynomial of order 1 to (IT, DL(ind1,ind2,:)):
%// Define some nominal values:
R = 0.3;
IT = 600:600:3000;
P = 0.97;
%// Impose some believable spatial variations:
pMAT = 0.01*randn(3)+P;
rMAT = 0.1*randn(3)+R;
%// Generate "fake" observation data:
dlMAT = bsxfun(@times,rMAT,bsxfun(@power,permute(IT,[3,1,2]),pMAT));
%// Regression:
sol = cell(size(rMAT)); %// preallocation
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = polyfit(log(IT(:)),log(squeeze(dlMAT(ind1,ind2,:))),1);
end
end
fittedP = cellfun(@(x)x(1),sol); %// Estimate of pMAT
fittedR = cellfun(@(x)exp(x(2)),sol); %// Estimate of rMAT
The above approach seems like a good candidate for vectorization, since it does not utilize MATLAB's main strength, that is, MATrix operations. For this reason, it does not scale well and takes much longer to execute than I think it should.
There exist alternative ways to perform this computation based on matrix division, as demonstrated here and here, which involve something like this:
sol = [ones(size(x)),log(x)]\log(y);
That is, appending a vector of 1s to the observations, followed by mldivide to solve the equation system.
The main challenge I'm facing is how to adapt my data to the algorithm (or vice versa).
Question #1: How can the matrix-division-based solution be extended to solve the problem presented above (and potentially replace the loops I am using)?
Question #2 (bonus): What is the principle behind this matrix-division-based solution?
The secret ingredient behind the solution that includes matrix division is the Vandermonde matrix. The question discusses a linear problem (linear regression), and such problems can always be formulated as a matrix problem, which \ (mldivide) can solve in a mean-square error sense. Such an algorithm, solving a similar problem, is demonstrated and explained in this answer.
Below is benchmarking code that compares the original solution with two alternatives suggested in chat:
function regressionBenchmark(numEl)
clc
if nargin<1, numEl=10; end
%// Define some nominal values:
R = 5;
IT = 600:600:3000;
P = 0.97;
%// Impose some believable spatial variations:
pMAT = 0.01*randn(numEl)+P;
rMAT = 0.1*randn(numEl)+R;
%// Generate "fake" measurement data using the relation "DL = R*IT.^P"
dlMAT = bsxfun(@times,rMAT,bsxfun(@power,permute(IT,[3,1,2]),pMAT));
%% // Method1: loops + polyfit
disp('-------------------------------Method 1: loops + polyfit')
tic; [fR,fP] = method1(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
%% // Method2: loops + Vandermonde
disp('-------------------------------Method 2: loops + Vandermonde')
tic; [fR,fP] = method2(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
%% // Method3: vectorized Vandermonde
disp('-------------------------------Method 3: vectorized Vandermonde')
tic; [fR,fP] = method3(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
function [fittedR,fittedP] = method1(IT,dlMAT)
sol = cell(size(dlMAT,1),size(dlMAT,2));
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = polyfit(log(IT(:)),log(squeeze(dlMAT(ind1,ind2,:))),1);
end
end
fittedR = cellfun(@(x)exp(x(2)),sol);
fittedP = cellfun(@(x)x(1),sol);
function [fittedR,fittedP] = method2(IT,dlMAT)
sol = cell(size(dlMAT,1),size(dlMAT,2));
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = flipud([ones(numel(IT),1) log(IT(:))]\log(squeeze(dlMAT(ind1,ind2,:)))).';
end
end
fittedR = cellfun(@(x)exp(x(2)),sol);
fittedP = cellfun(@(x)x(1),sol);
function [fittedR,fittedP] = method3(IT,dlMAT)
N = 1; %// Degree of polynomial
VM = bsxfun(@power, log(IT(:)), 0:N); %// Vandermonde matrix
result = fliplr((VM\log(reshape(dlMAT,[],size(dlMAT,3)).')).');
%// Compressed version:
%// result = fliplr(([ones(numel(IT),1) log(IT(:))]\log(reshape(dlMAT,[],size(dlMAT,3)).')).');
fittedR = exp(real(reshape(result(:,2),size(dlMAT,1),size(dlMAT,2))));
fittedP = real(reshape(result(:,1),size(dlMAT,1),size(dlMAT,2)));
The reason why method 2 can be vectorized into method 3 is essentially that matrix multiplication can be separated by the columns of the second matrix. If A*B produces matrix X, then by definition A*B(:,n) gives X(:,n) for any n. Moving A to the right-hand side with mldivide, this means that the divisions A\X(:,n) can be done in one go for all n with A\X. The same holds for an overdetermined system (a linear regression problem), in which there is no exact solution in general, and mldivide finds the solution that minimizes the mean-square error. In this case too, the operations A\X(:,n) (method 2) can be done in one go for all n with A\X (method 3).
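A quick way to convince yourself of this column-wise equivalence (a toy check, unrelated to the photographic data):
A = rand(5,2);                 % overdetermined: 5 equations, 2 unknowns
X = rand(5,3);                 % three right-hand sides
sol_all = A\X;                 % least-squares solve for all columns at once
sol_one = A\X(:,2);            % least-squares solve for a single column
norm(sol_all(:,2) - sol_one)   % essentially zero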
The benefit of improving the algorithm grows with the size of dlMAT (the timing plot is omitted here). For the case of 500*500 (i.e. 2.5E5) elements, the speedup from Method 1 to Method 3 is about x3500!
It is also interesting to observe the output of profile for the 500*500 case (the profiler screenshots for Methods 1, 2 and 3 are omitted here).
From the above it is seen that rearranging the elements via squeeze and flipud takes up about half (!) of the runtime of Method 2. It is also seen that some time is lost on the conversion of the solution from cells to matrices.
Since the 3rd solution avoids all of these pitfalls, as well as the loops altogether (which mostly means re-evaluation of the script on every iteration) - it unsurprisingly results in a considerable speedup.
Notes:
There was very little difference between the "compressed" and the "explicit" versions of Method 3, slightly in favor of the "explicit" version. For this reason the "compressed" version was not included in the comparison.
A solution was attempted where the inputs to Method 3 were gpuArray-ed. This did not improve performance (and even somewhat degraded it), possibly due to an incorrect implementation, or the overhead associated with copying matrices back and forth between RAM and VRAM.
I have a model with linear constraints and a nonlinear objective function, and I'm trying to use MATLAB's fmincon to solve it. Actually, Aineq is a 24*13 matrix, and Aeq is a 24*13 matrix as well. But when I enter this command:
>> [x, lambda] = fmincon(@MP_ObjF,Aineq,bineq,Aeq,beq);
I encounter this error:
Warning: Trust-region-reflective method does not currently solve this type of
problem, using active-set (line search) instead.
> In fmincon at 439
??? Error using ==> fmincon at 692
Aeq must have 312 column(s).
What is probably wrong with it? Why should Aeq have 312 columns?!? I will appreciate any help. Thanks.
If you look at the documentation for fmincon (doc fmincon) you'll see an input called opt. In this you can set the algorithm used by MATLAB to solve your minimization problem. If you run
Opt=optimset('fmincon');
Then you can modify the algorithm option using
Opt.Algorithm = 'active-set';
Just send Opt to fmincon and then MATLAB won't have this problem anymore. Take a look inside Opt and you'll find a ton of options you can change to modify the optimization routine.
As for the number of columns: if you're using linear constraints, then your input argument to MP_ObjF must be a column vector with n rows and 1 column. Then A must be m x n, where m is the number of constraints and n is the number of variables. This is so that the matrix multiplication is well defined.
I'm sorry if my first answer was ambiguous. Maybe it will help if I do an example, as I saw several suspicious things in your comments. Let's say we want to minimize x^2 + y^2 + (z-1)^2 subject to x + y + z = 1, 2x + 3y - 4z <= 5, x,y,z >= -5. The solution is obviously (0,0,1)...
We first have to make our objective function,
fun = @(vec) vec(1)^2 + vec(2)^2 + (vec(3)-1)^2;
For fmincon to work, there can only be one input to the function, but that input can be a vector. So here x = vec(1), y = vec(2), and z = vec(3). I think your comments indicate that your objective function has multiple inputs. If you need to pass some parameters that aren't being optimized, there is documentation for this on MathWorks' site (http://www.mathworks.com/help/optim/ug/passing-extra-parameters.html), as sketched below.
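The usual pattern from that page is to capture the fixed parameters in an anonymous function; for instance (c here is a hypothetical fixed parameter, not something from your problem):
myObj = @(vec, c) c*(vec(1)^2 + vec(2)^2) + (vec(3)-1)^2; % two inputs
c = 1;                      % fix the extra parameter once
fun = @(vec) myObj(vec, c); % fmincon now sees a single-input function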
Then we can set the optimization settings
opt = optimset('fmincon');
opt.Algorithm = 'active-set';
You may also have to modify the large-scale setting for the algorithm warning to go away, I can't remember...
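If I remember right, it would be something along these lines (the option name is from memory, so double-check it in your version):
opt.LargeScale = 'off';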
Then we can set
Aeq = [1,1,1]; % equality constraint, if you had another eq constraint, it would be another row to Aeq
beq = 1; % equality constraint
A = [2,3,-4]; % inequality
b = 5; % inequality
lb = [-5;-5;-5]; % lower bound
x0 = [0.5;0.5;0]; % initial feasible guess, needs to be a column vector
[x,fval] = fmincon(fun,x0,A,b,Aeq,beq,lb,[],[],opt);
Then hopefully this finds x = [0;0;1]
I would like to minimize w'Hw with respect to w, where w is a vector and H is a matrix.
And with the following constraint: |w1|+|w2|+|w3| < 3, i.e. the l1 norm of the weight vector is less than 3.
How can I do this in matlab?
Thanks
You're trying to solve a quadratic minimization problem with linear constraints (also known as quadratic programming).
Do you know anything about your matrix H -- in particular, is it positive semidefinite? I would really expect this to be the case, since this is usual for the problem domains in which quadratic programming problems usually crop up.
If H really is positive semidefinite, and your only constraint is |w1|+|w2|+|w3| < 3, then, as Richie Cotton has already pointed out, the minimum is trivially at w=0. Maybe you have some additional constraints?
If you do have additional constraints, but H is still positive semidefinite, there are existing efficient solvers for this class of problem. In MATLAB, take a look at quadprog.
You'll have to reformulate your single nonlinear constraint |w1|+|w2|+|w3| < 3 as a series of linear constraints.
In the one-dimensional case, the constraint |w1| < 1 turns into two linear constraints:
w1 < 1
-w1 < 1.
In the two-dimensional case, the constraint |w1| + |w2| < 1 turns into four linear constraints:
w1+w2 < 1
w1-w2 < 1
-w1+w2 < 1
-w1-w2 < 1
I'll leave the extension to three dimensions to you.
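For reference, here is a sketch of the three-dimensional case with quadprog. Note that quadprog minimizes (1/2)*w'*H*w + f'*w, so for the pure w'Hw objective you would pass 2*H with f = 0; since f = 0 makes w = 0 the trivial minimizer, an arbitrary linear term is included here just so the l1 constraint becomes active (H and f are made-up examples):
H = diag([2 3 4]);          % example positive definite matrix
f = [-10; -10; -10];        % example linear term so the constraint binds
% All 2^3 = 8 sign combinations of +/-w1 +/-w2 +/-w3 <= 3:
signs = dec2bin(0:7) - '0'; % 8x3 matrix of 0s and 1s
A = (-1).^signs;            % 8x3 matrix of +/-1 entries
b = 3*ones(8,1);
w = quadprog(H, f, A, b)    % the l1 norm of w will sit on the boundary 3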
You need to use the Optimization Toolbox, specifically fmincon:
use fun to define w'Hw, and you want the nonlinear inequality constraint c = (|w1|+|w2|+|w3| - 3) < 0, which you set with nonlcon (see the documentation).
I'd suggest you look at the fminsearch function in the MATLAB documentation.
Rasman, below is the fmincon code I am using:
function PortfolioWeights = GMVPC1Type2(SCM)
w0 = [0.1 0.1 0.1 0.1 0.1];
n = length(w0);
options = optimset('Display','final-detailed');
PortfolioWeights = fmincon(@myobj2,w0,[],[],ones(1,n),1,[],[],@myconstraint1,options)
function f = myobj2(w)
w = [w(1);w(2);w(3);w(4);w(5)];
f = w'*SCM*w;
end
end
-----------------------------------------------------------------------------
function [c ceq] = myconstraint1(w)
c = abs(w(1))+abs(w(2))+abs(w(3))+abs(w(4))+abs(w(5))-1;
ceq = [];
end
------------------------------------------------------------------------------
I added in options = optimset('Display','final-detailed'); as you suggested. I get the following message:
Optimization stopped because the predicted change in the objective function,
6.115031e-009, is less than options.TolFun = 1.000000e-006, and the maximum constraint
violation, 0.000000e+000, is less than options.TolCon = 1.000000e-006.
Optimization Metric                                    Options
abs(steplength*directional derivative) = 6.12e-009     TolFun = 1e-006 (default)
max(constraint violation) = 0.00e+000                  TolCon = 1e-006 (default)
Active inequalities (to within options.TolCon = 1e-006):
lower upper ineqlin ineqnonlin
1
PortfolioWeights =
0.2000 0.2000 0.2000 0.2000 0.2000
The matrix I am using is:
0.000257165759136336 8.48196702102889e-05 9.27141501220362e-05 0.000111360154790061 0.000155196440517440
8.48196702102889e-05 0.000277377166669392 0.000101880007672550 0.000107375764193076 0.000117042329431538
9.27141501220362e-05 0.000101880007672550 0.000300697293925817 0.000112004860252653 0.000134354417344316
0.000111360154790061 0.000107375764193076 0.000112004860252653 0.000311028738698100 0.000147296211557256
0.000155196440517440 0.000117042329431538 0.000134354417344316 0.000147296211557256 0.000376418027192374