Matlab lsqnonlin() exitflag=4

I am optimizing some test data using lsqnonlin (i.e. data simulated from known parameter values).
maturity=[1 3 6 9 12 15 18 21 24 30 36 48 60 72 84 96 108 120]'; %maturities
options=optimset('Algorithm',{'levenberg-marquardt',.01},'Display','iter','TolFun',10^(-20),'TolX',10^-3,'MaxFunEvals',10000,'MaxIter',10000); %LM
vp0=[0.99 0.94 0.84 0.0802 -0.0144 -0.0042 0.001693 0.004094 0.003256 log(0.000960765^2) 0.077]'; %LM
[vpML,resnorm,residual,exitflag,output,lambda,jacobian]=lsqnonlin(@(vp) DNS_LL_LM(vp,y,maturity),vp0,[],[],options); %LM
I want convergence to occur when the norm of the change in the parameter vector falls below 10^-6.
Since 'TolX' refers to the raw change in the parameter vector, I use 10^-3 as the tolerance on X, which when squared gives the desired norm of 10^-6.
However, I find that when I run the code the exitflag keeps coming up as exitflag = 4: "Magnitude of search direction was smaller than the specified tolerance."
But there is nowhere to set a tolerance for the search direction; in the options you can only set "TolX" and "TolFun":
http://www.mathworks.co.uk/help/optim/ug/lsqnonlin.html#f265106
So how can I force the optimization to keep running until my desired convergence criterion is met?
Kind Regards
Baz

OK, I went into the code and there seems to be a disconnect between the exitflags as described here and what the code actually does:
http://www.mathworks.co.uk/help/optim/ug/lsqnonlin.html#f265106
For example, exitflag 2, which according to the link above is supposed to mean that the change in x was less than the specified tolerance, is in fact used here to indicate that the Jacobian is undefined:
if undefJac
    EXITFLAG = 2;
    msgFlag = 26;
    msgData = {'levenbergMarquardt',msgFlag,verbosity > 0,detailedExitMsg,caller, ...
        [], [], []};
    done = true;
The description of exitflag 4 on the mathworks page is a little vague but you can see what it is doing below:
if norm(step) < tolX*(sqrtEps + norm(XOUT))
    EXITFLAG = 4;
    msgData = {'levenbergMarquardt',EXITFLAG,verbosity > 0,detailedExitMsg,caller, ...
        norm(step)/(sqrtEps+norm(XOUT)),optionFeedback.TolX,tolX};
    done = true;
It seems to be testing whether the norm of the step is less than TolX times (sqrtEps plus the norm of X). This is along the lines of what I want, and can easily be changed to give me exactly what I want.
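If editing the toolbox source is not desirable, a workaround (a sketch only, using the documented 'OutputFcn' option; the 1e-6 threshold is the norm tolerance described above) is to set 'TolX' very small so the built-in step test does not fire first, and stop the solver from an output function once the change in the parameter vector is small enough:
% Sketch: custom stopping rule for lsqnonlin via an output function.
% Save as normStopFcn.m on the path.
function stop = normStopFcn(x, optimValues, state)
    persistent xPrev
    stop = false;
    switch state
        case 'init'
            xPrev = x;
        case 'iter'
            if norm(x - xPrev) < 1e-6
                stop = true;     % the solver then returns with exitflag = -1
            end
            xPrev = x;
    end
end

% Usage: add the output function and shrink TolX so it does not trigger first.
options = optimset(options,'TolX',1e-12,'OutputFcn',@normStopFcn);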


Matlab - Chien Search implementation initialisation

I'm trying to implement a DVB-S2 (48408, 48600) BCH decoder and I'm having trouble finding the roots of the error locator polynomial. For the Chien search, the author here initialises the registers taking into account the fact that the code is shortened, subtracting 48600 from (2^16 - 1). Why so?
This is the code I have so far:
function [error_locations, errors] = compute_chien_search(n, locator_polynomial, field, alpha_powers, n_max)
    t = length(locator_polynomial);
    error_locations = zeros(t, 1);
    errors = 0;
    % Init the registers with the locator polynomial coefficients.
    coefficient_buffer = locator_polynomial;
    alpha_degrees = uint32(1:t)';
    alpha_polynoms = field(alpha_degrees);
    alpha_polynoms = [1; alpha_polynoms];
    for i = 1:n
        for j = 2:t
            coefficient_buffer(j) = gf_mul_elements(coefficient_buffer(j), ...
                                                    alpha_polynoms(j), ...
                                                    field, alpha_powers, n_max);
        end
        % Compute locator polynomial at position i
        tmp = 0;
        for j = 2:t
            tmp = bitxor(tmp, coefficient_buffer(j));
        end
        % Signal the error
        if tmp == 1
            errors = errors + 1;
            error_locations(errors) = n_max - i + 1;
        end
    end
end
It almost gives me the correct result except for some error locations. For example: for errors made in positions
418 14150 24575 25775 37403
The code above gives me
48183 47718 34451 24026 22826
which after subtracting from 48600 gives:
417 882 14149 24574 25774
which is each position minus 1, except for 37403, which it did not find.
What am I missing?
Edit:
The code in question is the DVB-S2 12-error-correcting (48408, 48600) BCH code. The generator polynomial has degree 192 and is obtained by multiplying the 12 minimal polynomials given in the standard's documentation.
Update - I created an example C program using Windows | Visual Studio for BCH(48600,48408). On my desktop (Intel 3770K 3.5 GHz, Win 7, VS 2015), encode takes about 30 µs and correcting 12 bit errors takes about 4.5 ms. On my laptop (Intel Core i7-10510U up to 4.9 GHz, Win 10, VS 2019), correcting 12 bit errors takes about 3.0 ms. I used a carryless multiply intrinsic to simplify generating the 192-bit polynomial, but this is a one-time generated constant. Encode uses a [256][3] table of 64-bit unsigned integers (192-bit polynomials) and decode uses a [256][12] table of 16-bit unsigned integer syndromes, to process a byte at a time.
The code includes both Berlekamp Massey and Sugiyama extended Euclid decoders that I copied from existing RS code I have. For BCH (not RS) code, the Berlekamp Massey discrepancy will be zero on odd steps, so for odd steps, the discrepancy is not calculated (the iteration count since last update is incremented, the same as when a calculated discrepancy is zero). I didn't see a significant change in running time, but I left the check in there.
The run times are about the same for BM or Euclid.
https://github.com/jeffareid/misc/blob/master/bch48600.c
I suspect an overflow problem in the case of a failure at bit error index 37403, since it is the only bit index > 2^15-1 (32767). There is this comment on that web site:
This code is great. However, it does not work for the large block sizes in the DVB-S2
specification. For example, it doesn't work with:
n = 16200;
n_max = 65535;
k_max = 65343;
t = 12;
prim_poly = 65581;
The good news is that the problem is easy to fix. Replace all the uint16() functions
in the code with uint32(). You will also have to run the following Matlab function
once. It took several hours for gftable() to complete on my computer.
gftable(16, 65581); (hex 1002D => x^16 + x^5 + x^3 + x^2 + 1)
The Chien search should be looking for the values 1/(2^(0)) to 1/(2^(48599)); then zero minus the log of those values gives offsets relative to the right-most bit of the message, and 48599 - offset gives indexes relative to the left-most bit of the message.
If the coefficients of the error locator polynomial are reversed, then the Chien search would be looking for values 2^(0) to 2^(48599).
https://en.wikipedia.org/wiki/Reciprocal_polynomial
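As a rough illustration of that last point (a sketch only; it assumes compute_chien_search stores the coefficients lowest-degree term first, which may not match your decoder):
% Sketch: run the Chien search on the reciprocal (reversed) locator polynomial,
% so that it tests 2^0 .. 2^(48599) directly rather than their inverses.
reversed_locator = flipud(locator_polynomial(:));
[error_locations, errors] = compute_chien_search(n, reversed_locator, ...
    field, alpha_powers, n_max);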

Calculate integral in Matlab [closed]

I am given the following data (a table of temperatures T and heat capacities Cp; the values are reproduced in the answers below):
and I am asked to calculate the integral of Cp/T dT from 113.7 to 264.4.
I am unsure of how I should solve this. If I want to use the integral command, I need a function, but I don't know what my function should look like in that case.
I have tried:
function func = Cp./T
T = [...]
Cp=[...]
end
but that didn't work.
Use the cumtrapz function in MATLAB.
T = [...]
Cp=[...]
CpdivT = Cp./T
I = cumtrapz(T, CpdivT)
You can read more about the function at https://www.mathworks.com/help/matlab/ref/cumtrapz.html
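If you need the definite integral over [113.7, 264.4] rather than the cumulative integral at the tabulated points, one variant (a sketch; T and Cp are the tabulated column vectors, listed in full in the next answer) is to insert the integration limits into the grid and use trapz:
t1 = 113.7;  t2 = 264.4;
CpdivT = Cp./T;
Tq = [t1; T(T > t1 & T < t2); t2];      % temperature grid restricted to [t1, t2]
Fq = interp1(T, CpdivT, Tq, 'linear');  % interpolate Cp/T at the limits
I  = trapz(Tq, Fq)                      % about 92, consistent with the answers below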
A simple approach uses interp1 and integral with plain-vanilla settings.
I would only use more sophisticated numerical techniques if the application required them. You can examine the 'RelTol' and 'AbsTol' options in the documentation for integral.
Numerical Integration: (w/ linear interpolation)
% MATLAB R2017a
T = [15 20 30 40 50 70 90 110 130 140 160 180 200 220 240 260 270 275 285 298]';
Cp = [5.32 10.54 21.05 30.75 37.15 49.04 59.91 70.04 101.59 103.05 106.78 ...
110.88 114.35 118.70 124.31 129.70 88.56 90.07 93.05 96.82]';
fh = @(t) interp1(T,Cp./T,t,'linear');
t1 = 113.7;
t2 = 264.4;
integral(fh,t1,t2)
ans = 91.9954
Alternate Methods of Interpolation:
Your results will depend on your method of interpolation (see code & graphic below).
% Results depend on method of interpolation
Linear = integral(@(t) interp1(T,Cp./T,t,'linear'),t1,t2)  % = 91.9954
Spline = integral(@(t) interp1(T,Cp./T,t,'spline'),t1,t2)  % = 92.5332
Cubic  = integral(@(t) interp1(T,Cp./T,t,'pchip'),t1,t2)   % = 92.0383
Code for graphic:
tTgts = T(1):.01:T(end);
figure, hold on, box on
p(1) = plot(tTgts,interp1(T,Cp./T,tTgts,'linear'),'b-','DisplayName','Linear')
p(2) = plot(tTgts,interp1(T,Cp./T,tTgts,'spline'),'r-','DisplayName','Spline')
p(3) = plot(tTgts,interp1(T,Cp./T,tTgts,'pchip'),'g-','DisplayName','Cubic')
p(4) = plot(T,Cp./T,'ks','DisplayName','Data')
xlim([floor(t1) ceil(t2)])
legend('show')
% Cosmetics
xlabel('T')
ylabel('Cp/T')
for k = 1:4, p(k).LineWidth = 2; end
A poor approximation: (to get rough order of magnitude estimate)
tspace = T(2:end)-T(1:end-1);
midpt = mean([Cp(1:end-1) Cp(2:end)],2);
sum(midpt.*tspace)./sum(tspace)
And you can see we're in the ballpark (makes me feel more comfortable at least).
Other viable MATLAB Functions: quadgk | quad
% interpolation method affects answer if using `interp1()`
quadgk(@(t) interp1(T,Cp./T,t,'linear'),t1,t2)
quad(@(t) interp1(T,Cp./T,t,'linear'),t1,t2)
Functions that would require more work: trapz | cumtrapz
Notice that trapz(Y) and cumtrapz(Y) assume unit spacing unless you also pass the x-coordinates (e.g. trapz(T, Y)); even then, integrating only over [t1, t2] would require first interpolating values at the integration limits.
Related Posts: (found after answer already completed)
Matlab numerical integration
How to numerically integrate vector data in Matlab?
This is probably better for your problem. Take note that I have assumed a 2nd-order polynomial fits your data well; you may want a better model structure if the fit is unsatisfactory.
% Data
T = [15 20 30 40 50 70 90 110 130 140 160 180 200 220 240 260 270 275 285 298];
Cp = [5.32 10.54 21.05 30.75 37.15 49.04 59.91 70.04 101.59 103.05 106.78 110.88 114.35 118.70 124.31 129.70 88.56 90.07 93.05 96.82];
% Fit function using 2nd order polynomial
f = fit(T',Cp'./T','poly2');
% Compare fit to actual data
plot(f,T,Cp./T)
% Create symbolic function
syms x
func = f.p1*x*x + f.p2*x + f.p3;
% Integrate function
I = int(func,113.7,264.4);
% Convert solution from symbolic to numeric value
V = double(I);
The result is 92.7839

linear combination of curves to match a single curve with integer constraints

I have a set of vectors (curves) which I would like to combine to match a single target curve. The issue isn't only finding the linear combination of the curves that most closely matches the single curve (this can be done with least squares, Ax = B). I need to be able to add constraints, for example limiting the number of curves used in the fit to a particular number, or requiring that the curves used lie next to each other. These are the kinds of constraints handled in mixed-integer linear programming.
I have started by using lsqlin, which allows constraints, and have been able to limit the variables to be > 0.0, but in terms of adding further constraints I am at a loss. Is there a way to add integer constraints to least squares, or alternatively is there a way to solve this with a MILP?
Any help in the right direction is much appreciated!
Edit: Based on the suggestion by ErwinKalvelagen I am attempting to use CPLEX and its quadratic solvers, but so far I have not managed to get it working. I have created a minimal 'not working' example and have uploaded the data here; the code is here below. The issue is that MATLAB's LS solver lsqlin is able to solve the problem, whereas CPLEX's cplexlsqnonneglin returns "CPLEX Error 5002: %s is not convex" for the same problem.
function [ ] = minWorkingLSexample( )
%MINWORKINGLSEXAMPLE for LS with matlab and CPLEX
%matlab is able to solve the least squares, CPLEX returns error:
% Error using cplexlsqnonneglin
% CPLEX Error 5002: %s is not convex.
%
%
% Error in Backscatter_Transform_excel2_readMut_LINPROG_CPLEX (line 203)
% cplexlsqnonneglin (C,d);
%
load('C_n_d_2.mat')
lb = zeros(size(C,2),1);
options = optimoptions('lsqlin','Algorithm','trust-region-reflective');
[fact2,resnorm,residual,exitflag,output] = ...
lsqlin(C,d,[],[],[],[],lb,[],[],options);
%% CPLEX
ctype = cellstr(repmat('C',1,size(C,2)));
options = cplexoptimset;
options.Display = 'on';
[fact3, resnorm, residual, exitflag, output] = ...
cplexlsqnonneglin (C,d);
end
I could reproduce the Cplex problem. Here is a workaround. Instead of solving the first model, use a model that is less nonlinear:
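(The two models were shown as images in the original answer; the following reconstruction is based on the description below.)

Model 1 (what lsqlin / cplexlsqnonneglin solve directly):

    min_x  ||C*x - d||^2      subject to  x >= 0

Model 2 (the workaround, with explicit residual variables r):

    min_{x,r}  r'*r           subject to  r = C*x - d,  x >= 0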
The second model solves fine with Cplex. The problem is somewhat of a tolerance/numeric issue. For the second model we have a much more well-behaved Q matrix (a diagonal). Essentially we moved some of the complexity from the objective into linear constraints.
You should now see something like:
Tried aggregator 1 time.
QP Presolve eliminated 1 rows and 1 columns.
Reduced QP has 401 rows, 443 columns, and 17201 nonzeros.
Reduced QP objective Q matrix has 401 nonzeros.
Presolve time = 0.02 sec. (1.21 ticks)
Parallel mode: using up to 8 threads for barrier.
Number of nonzeros in lower triangle of A*A' = 80200
Using Approximate Minimum Degree ordering
Total time for automatic ordering = 0.00 sec. (3.57 ticks)
Summary statistics for Cholesky factor:
Threads = 8
Rows in Factor = 401
Integer space required = 401
Total non-zeros in factor = 80601
Total FP ops to factor = 21574201
Itn Primal Obj Dual Obj Prim Inf Upper Inf Dual Inf
0 3.3391791e-01 -3.3391791e-01 9.70e+03 0.00e+00 4.20e+04
1 9.6533667e+02 -3.0509942e+03 1.21e-12 0.00e+00 1.71e-11
2 6.4361775e+01 -3.6729243e+02 3.08e-13 0.00e+00 1.71e-11
3 2.2399862e+01 -6.8231454e+01 1.14e-13 0.00e+00 3.75e-12
4 6.8012056e+00 -2.0011575e+01 2.45e-13 0.00e+00 1.04e-12
5 3.3548410e+00 -1.9547176e+00 1.18e-13 0.00e+00 3.55e-13
6 1.9866256e+00 6.0981384e-01 5.55e-13 0.00e+00 1.86e-13
7 1.4271894e+00 1.0119284e+00 2.82e-12 0.00e+00 1.15e-13
8 1.1434804e+00 1.1081026e+00 6.93e-12 0.00e+00 1.09e-13
9 1.1163905e+00 1.1149752e+00 5.89e-12 0.00e+00 1.14e-13
10 1.1153877e+00 1.1153509e+00 2.52e-11 0.00e+00 9.71e-14
11 1.1153611e+00 1.1153602e+00 2.10e-11 0.00e+00 8.69e-14
12 1.1153604e+00 1.1153604e+00 1.10e-11 0.00e+00 8.96e-14
Barrier time = 0.17 sec. (38.31 ticks)
Total time on 8 threads = 0.17 sec. (38.31 ticks)
QP status(1): optimal
Cplex Time: 0.17sec (det. 38.31 ticks)
Optimal solution found.
Objective : 1.115360
See here for some details.
Update: In Matlab this becomes:
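(The MATLAB listing that followed here is missing; below is a sketch of the same reformulation using quadprog as a stand-in for the CPLEX QP interface, with C and d loaded from C_n_d_2.mat as in the question.)
% Sketch of Model 2 in MATLAB: variables z = [x; r], with r = C*x - d enforced
% as equality constraints, so the objective's Q matrix is a diagonal (identity
% on the r block). quadprog is used here as a stand-in for the CPLEX QP solver.
load('C_n_d_2.mat')                      % provides C (m-by-n) and d (m-by-1)
[m, n] = size(C);
H   = blkdiag(zeros(n), eye(m));         % 0.5*z'*H*z = 0.5*r'*r
f   = zeros(n + m, 1);
Aeq = [C, -eye(m)];                      % C*x - r = d
beq = d(:);
lb  = [zeros(n, 1); -inf(m, 1)];         % x >= 0, r free
z   = quadprog(H, f, [], [], Aeq, beq, lb, []);
x   = z(1:n);                            % should match lsqlin's fact2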

fsolve/fzero: No solution found, appears regular

I am trying to do the following algorithm using fsolve or fzero:
K5=8.37e-2
P=1
Choose an A
S2=(4*K5/A)^(2/3)
S6=3*S2
S8=4*S2
SO2 = (5*P)/149 - (101*S2)/149 - (293*S6)/149 - (389*S8)/149
H2O = (40*P)/149 + (556*S2)/447 + (636*S6)/149 + (2584*S8)/447
H2S = 2*SO2
newA = (H2O)^2/(SO2)^3
Repeat until newA=oldA
The main equation to satisfy is K5 = (1/4)*A*S2^(3/2); it is from this that S2 is calculated in the first place.
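For reference, the repeat-until loop above, written out literally, would look like the sketch below (whether this direct substitution converges is not established here):
% Sketch: direct-substitution version of the algorithm described above.
K5 = 8.37e-2;  P = 1;
A = 300000;                        % initial guess, as in the fsolve code below
for iter = 1:200
    S2  = (4*K5/A)^(2/3);
    S6  = 3*S2;
    S8  = 4*S2;
    SO2 = (5*P)/149 - (101*S2)/149 - (293*S6)/149 - (389*S8)/149;
    H2O = (40*P)/149 + (556*S2)/447 + (636*S6)/149 + (2584*S8)/447;
    newA = H2O^2/SO2^3;
    if abs(newA - A) < 1e-9*abs(A)     % newA ~ oldA to within a relative tolerance
        break
    end
    A = newA;
end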
So here is what I did in Matlab:
function MultipleNLEexample
clear, clc, format short g, format compact
Aguess = 300000; % initial guess
options = optimoptions('fsolve','Display','iter','TolFun',1e-9,'TolX',1e-9); % Option to display output
xsolv = fsolve(@MNLEfun,Aguess,options);
[~,ans] = MNLEfun(xsolv)
%- - - - - - - - - - - - - - - - - - - - - -
function varargout = MNLEfun(A)
K5 = 8.37e-2;
S2 = (4*K5/A)^(2/3);
S6 = 3*S2;
S8 = 4*S2;
P = 1; %atm
SO2 = (5*P)/149 - (101*S2)/149 - (293*S6)/149 - (389*S8)/149;
H2O = (40*P)/149 + (556*S2)/447 + (636*S6)/149 + (2584*S8)/447;
newA = H2O^2/SO2^3;
fx = 1/4*newA*S2^(3/2) - K5;
varargout{1} = fx;
if nargout > 1
    H2S = 2*SO2;
    varargout{2} = ((2*S2+6*S6+8*S8)/(2*S2+6*S6+8*S8+H2S+SO2)*100);
end
I cannot get my code to find a solution; I get the following message:
No solution found.
fsolve stopped because the problem appears regular as measured by the gradient,
but the vector of function values is not near zero as measured by the
selected value of the function tolerance.
I have tried setting the tolerances as low as 1e-20 but that did not change anything.
The way your system is set up, it is actually convenient to plot it and observe its behavior. I vectorized your function and plotted f(x) = MNLEfun(x) - x, where the output of MNLEfun(x) is newA. Effectively, you are interested in a fixed point of your system.
What I observe is this:
There is a singularity, and a root crossing, at A ~ 3800. We can use fzero, since it is a bracketed root solver, and give it very tight bounds on the solution, fzero(@(x) MNLEfun(x)-x, [3824,3825]), which produces 3.8243e+03. This is a couple of orders of magnitude away from your starting guess. There is no solution to your system near ~3e5.
Update
In my haste, I failed to zoom in on the plot, which shows another (well-behaved) root at 1.3294e+04. It is up to you to decide which one(s) are physically meaningful. Everything I say below still applies; just start your guess near the solution you're interested in.
In response to comment
Since you want to perform this for varying values of K, your best bet is to stick with fzero rather than fsolve, as long as you are solving for one variable. The reason is that fsolve uses variants of Newton's method, which are not bracketed and will struggle to find solutions near singular points like this. fzero, on the other hand, uses Brent's method, which is guaranteed to find a root (if it exists) within a bracket, and it is much better behaved near singular points.
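For instance, a sweep over several K5 values could look like the sketch below (not part of the original answer; the K5 values and starting guess are illustrative, and the fixed-point map from the question is inlined as newA):
% Sketch: solve newA(A) = A for several K5 values with fzero, warm-starting
% each solve from the previous root. The constants consolidate the question's
% expressions for S2, S6, S8, SO2 and H2O.
P    = 1;
s2   = @(A,K5) (4*K5./A).^(2/3);
SO2  = @(A,K5) (5*P)/149 - (101 + 3*293 + 4*389)/149 .* s2(A,K5);
H2O  = @(A,K5) (40*P)/149 + (556/447 + 3*636/149 + 4*2584/447) .* s2(A,K5);
newA = @(A,K5) H2O(A,K5).^2 ./ SO2(A,K5).^3;
Kvals = [0.06 0.0837 0.10];      % illustrative K5 values
Avals = zeros(size(Kvals));
guess = 1.3294e4;                % the well-behaved root reported above
for k = 1:numel(Kvals)
    Avals(k) = fzero(@(A) newA(A,Kvals(k)) - A, guess);
    guess = Avals(k);            % warm-start the next K5
end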
MATLAB's implementation of fzero also searches for a bracketing interval before carrying out Brent's method. So, if you provide a single starting guess sufficiently close to the root, it should find it for you. Sample output from fzero below:
fzero(@(x) MNLEfun(x)-x, 3000, optimset('display', 'iter'))
Search for an interval around 3000 containing a sign change:
Func-count a f(a) b f(b) Procedure
1 3000 -616789 3000 -616789 initial interval
3 2915.15 -433170 3084.85 -905801 search
5 2880 -377057 3120 -1.07362e+06 search
7 2830.29 -311972 3169.71 -1.38274e+06 search
9 2760 -241524 3240 -2.03722e+06 search
11 2660.59 -171701 3339.41 -3.80346e+06 search
13 2520 -109658 3480 -1.16164e+07 search
15 2321.18 -61340.4 3678.82 -1.7387e+08 search
17 2040 -29142.6 3960 2.52373e+08 search
Search for a zero in the interval [2040, 3960]:
Func-count x f(x) Procedure
17 2040 -29142.6 initial
18 2040.22 -29158.9 interpolation
19 3000.11 -617085 bisection
20 3480.06 -1.16224e+07 bisection
21 3960 2.52373e+08 bisection
22 3720.03 -4.83826e+08 interpolation
....
87 3824.32 -5.46204e+48 bisection
88 3824.32 1.03576e+50 bisection
89 3824.32 1.03576e+50 interpolation
Current point x may be near a singular point. The interval [2040, 3960]
reduced to the requested tolerance and the function changes sign in the interval,
but f(x) increased in magnitude as the interval reduced.
ans =
3.8243e+03

Issue integrating inside ode45 loop

When I try to run this Matlab code, it seems to go into an infinite loop. I am trying to perform integration inside ode45:
clear
clc
options = odeset('RelTol',1e-4,'AbsTol',[1e-4 1e-4 1e-5]);
[T,Y] = ode45(@rigid,[0 12],[0 1 1],options);
plot(T,Y(:,1),'+',T,Y(:,2),'*',T,Y(:,3),'.')
function dy = rigid(t,y)
dy = zeros(3,1); % a column vector
dy(1) = y(2) ;
dy(2) = -y(1) * y(3);
fun = @(t) exp(-t.^2).*log(t).^2 + y(1);
q = integral(fun,0,Inf);
dy(3) = y(2) * y(3) + q;
There is no "infinite loop." Your function just takes a very long time to integrate. Try setting tspan to [0 1e-7]. It appears to be a high frequency oscillation, but I don't know if your equations are correct (that's a math question rather than a programming one). Such systems are hard to integrate accurately (ode15 might be a better choice), let alone quickly.
You also didn't bother to mention the important fact that the call to integral is generating a warning message:
Warning: Minimum step size reached near x = 1.75484e+22. There may be a
singularity, or the tolerances may be too tight for this problem.
> In funfun/private/integralCalc>checkSpacing at 457
In funfun/private/integralCalc>iterateScalarValued at 320
In funfun/private/integralCalc>vadapt at 133
In funfun/private/integralCalc at 84
In integral at 88
In rtest1>rigid at 17
In ode15s at 580
In rtest1 at 5
Printing out warning messages on each iteration greatly slows down integration. There's a good reason for this warning. You do realize that the integral that you're evaluating from 0 to Inf is equivalent to the following, right?
sqrt(pi)*((eulergamma + log(4))^2/8 + pi^2/16) + Inf*y(1)
where eulergamma is -psi(1) or double(sym('eulergamma')). Your integral doesn't converge.
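A quick numerical check of the convergent part of that expression (a sketch; the Euler-Mascheroni constant is written out so no toolbox is needed):
% Check the closed form against a numerical evaluation of the convergent part.
gamma_em    = 0.577215664901533;                              % Euler-Mascheroni constant
closed_form = sqrt(pi)*((gamma_em + log(4))^2/8 + pi^2/16)    % ~ 1.9475
numeric     = integral(@(t) exp(-t.^2).*log(t).^2, 0, Inf)    % ~ 1.9475
% The divergence therefore comes entirely from the Inf*y(1) term, i.e. from
% adding the constant y(1) to the integrand over an infinite interval.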
If you like, you can try to avoid the warning message in one of two ways.
1. Turn off the warning (being sure to re-enable it afterwards). You can do that with the following code:
...
warning('OFF','MATLAB:integral:MinStepSize');
[T,Y] = ode45(@rigid,[0 12],[0 1 1],options);
warning('ON','MATLAB:integral:MinStepSize');
...
You can obtain the ID for a warning via the lastwarn function.
2. The other option might be to change your integration bounds and avoid the warning altogether, e.g.:
...
q = integral(fun,0,1e20);
...
This may or may not be acceptable, but integral is not returning a correct value in either case, because the integral doesn't converge.