scipy.optimize.least_squares() Objective Function Questions

I am trying to minimize a highly non-linear function by optimizing three unknown parameters a, b, and c0. I'm attempting to replicate some governing equations of a casino roulette ball in Python 3.
Here is the link to the research paper:
http://www.dewtronics.com/tutorials/roulette/documents/Roulette_Physik.pdf
I will be referencing equations (35) and (40) in the paper.
Basically, I take stopwatch lap measurements of the roulette ball spinning on the wheel. For each successive lap, the lap time increases because the ball loses momentum to non-conservative frictional forces. I then take these time measurements and fit equation (35) to them using the Levenberg-Marquardt least-squares method given in equation (40).
My question is twofold:
(1) I'm using scipy.optimize.least_squares() with method='lm', and I'm not sure how to write the objective function! Right now I have the function written exactly as it is in the paper:
def fall_time(k, a, b, c0):
    F = (1 / (a * b)) * (c0 - np.arcsinh(c0) * np.exp(a * k * 2 * np.pi))
    return F

def parameter_estimation_function(x0, tk):
    a = x0[0]
    b = x0[1]
    c0 = x0[2]
    S = 0
    for i, t in enumerate(tk):
        k = i + 1
        S += (t - fall_time(k, a, b, c0))**2
    return [S, 1, 1]
sol = least_squares(parameter_estimation_function,[0.1,0.8,-0.1],args=([tk1]),method='lm',jac='2-point',max_nfev=2000)
print(sol)
Now, in the documentation examples, I never saw the objective function written the way I have it. In the documentation, the objective function always returns the residual, not the square of the residual. Additionally, the documentation examples never use the sum! So I'm wondering if the sum and the square are automatically handled under the hood of least_squares()?
(2) Perhaps my second question is a result of my failure to understand how to write the objective function. But anyhow, I'm having trouble getting the algorithm to converge on the minimum. I know this is because the Levenberg-Marquardt algorithm is "greedy" and stops near the closest minimum, but I figured that I would be able to at least converge on about the same result given different initial guesses. With slight alterations in the initial guess, I'm getting parameter results with different signs. Additionally, I've yet to find a combination of initial guesses that allows the algorithm to converge! It always times out before it finds the solution. I've even increased the number of function evaluations to 10,000 to see if it would help. To no avail!
Perhaps somebody could shed some light on my mistakes here! I'm still relatively new to python and the scipy library!
Here is some sample data for tk that I've measured myself from the video here: https://www.youtube.com/watch?v=0Zj_9ypBnzg
tk = [0.52,1.28,2.04,3.17,4.53,6.22]
tk1 = [0.51,1.4,2.09,3,4.42,6.17]
tk2 = [0.63,1.35,2.19,3.02,4.57,6.29]
tk3 = [0.63,1.39,2.23,3.28,4.70,6.32]
tk4 = [0.57,1.4,2.1,3.06,4.53,6.17]
Thanks

1) Yes, as you suspected, the sum and the square of the residuals are handled automatically: least_squares() expects the objective function to return the vector of residuals, and it squares and sums them internally (minimizing half the sum of squares).
2) Hard to say, since I'm not deeply familiar with the problem (e.g., how many local minima exist, what constitutes a 'reasonable' result, etc.). I may investigate more later.
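In the meantime, here's a minimal sketch to make (1) concrete (a toy linear model, not your roulette equation): the callable you pass to least_squares() returns the residual vector, and the solver forms the sum of squares itself, reporting half that sum as sol.cost. It also hints at why the [S, 1, 1] trick is problematic: with method='lm' the residual vector must have at least as many entries as parameters (presumably why the padding is there), but collapsing everything into one scalar S throws away the per-measurement structure that Levenberg-Marquardt relies on.

import numpy as np
from scipy.optimize import least_squares

# Toy data for y = m*x + q (hypothetical numbers, purely illustrative).
x = np.array([0., 1., 2., 3., 4.])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

def residuals(p, x, y):
    # Return the residual VECTOR; no squaring, no summing.
    return y - (p[0]*x + p[1])

sol = least_squares(residuals, [1.0, 0.0], args=(x, y), method='lm')
r = residuals(sol.x, x, y)
print(sol.cost, 0.5*np.sum(r**2))  # identical: cost = 0.5 * sum(residuals**2)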
But for kicks I fiddled with some of the values to see what would happen. For example, you can just replace the 1/b constant with a standalone variable b_inv, and this seemed to stabilize the results quite a bit. Here's the code I used to check results. (Note that I rewrote the objective function for brevity. It simply leverages the element-wise operations of numpy arrays, without changing the overall result.)
import numpy as np
from scipy.optimize import least_squares

def fall_time(k, a, b_inv, c0):
    return (b_inv / a) * (c0 - np.arcsinh(c0) * np.exp(a * k * 2 * np.pi))

def parameter_estimation_function(x, tk):
    return np.asarray(tk) - fall_time(k=np.arange(1, len(tk) + 1),
                                      a=x[0], b_inv=x[1], c0=x[2])

tk_samples = [
    [0.52, 1.28, 2.04, 3.17, 4.53, 6.22],
    [0.51, 1.4, 2.09, 3, 4.42, 6.17],
    [0.63, 1.35, 2.19, 3.02, 4.57, 6.29],
    [0.63, 1.39, 2.23, 3.28, 4.70, 6.32],
    [0.57, 1.4, 2.1, 3.06, 4.53, 6.17]
]

for i in range(len(tk_samples)):
    sol = least_squares(parameter_estimation_function, [0.1, 1.25, -0.1],
                        args=(tk_samples[i],), method='lm', jac='2-point',
                        max_nfev=2000)
    print(sol.x)
with console output:
[ 0.03621789 0.64201913 -0.12072879]
[ 3.59319972e-02 1.17129458e+01 -6.53358716e-03]
[ 3.55516005e-02 1.48491493e+01 -5.31098257e-03]
[ 3.18068316e-02 1.11828091e+01 -7.75329834e-03]
[ 3.43920725e-02 1.25160378e+01 -6.36307506e-03]
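As a quick sanity check on any of these fits, you can compare the model's predicted lap times against the measured ones (a sketch reusing fall_time, parameter_estimation_function and tk_samples from above):

# Compare measured vs. fitted lap times for the first sample.
tk = np.asarray(tk_samples[0])
fit = least_squares(parameter_estimation_function, [0.1, 1.25, -0.1],
                    args=(tk,), method='lm', jac='2-point', max_nfev=2000)
predicted = fall_time(np.arange(1, len(tk) + 1), *fit.x)
for k, (meas, pred) in enumerate(zip(tk, predicted), start=1):
    print("lap %d: measured %.2f s, predicted %.2f s" % (k, meas, pred))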


Declaring a functional recursive sequence in Matlab

I'd like to declare first of all, that I'm a mathematician. This might be a stupid stupid question; but I've gone through all the matlab tutorials--they've gotten me nowhere. I imagine I could code this in C (it'd be exhausting); but I need matlab for this particular function. And I don't get exactly how to do it.
Here is the pasted Matlab code of where I'm running into trouble:
function y = TAU(z,n)
    y = 0;
    for i = [1,n]
        y(z) = log(beta(z+1,i) + y(z+1)) - beta(z,i);
    end
end
(beta is an arbitrary "float" to "float" function with an index i.)
I'm having trouble declaring y as a function in which we call the function at a different argument. I want to define y_n(z) using something like y_{n-1}(z+1). This is all done in a recursive process to create the function. I really feel like I'm missing something stupid.
By default, Matlab assigns y to be an array (or whatever you call the default index assignment). But I don't want an array. I want y to be assigned as a "function" class (i.e. one that takes "float" to "float"). And then I'm defining a sequence of functions y_n : "float" to "float", so that z to z+1 is a map on "float" to "float".
I don't know if I'm asking too much of matlab...
Help a poor mathematician who hasn't coded since the glory days of X-box mods.
...Please don't tell me I have to go back to Pari-GP/C drawing boards over something so stupid.
Please help!
EDIT: At rahnema1's & mimocha's request, I'll describe the math and what I am trying to do with my program. I can't see how to implement LaTeX in here, so I'll write the LaTeX code in a generator and upload a picture. I'm not so sure there even is a workaround for what I want to do.
As to the expected output, we'd want:
beta(z+1,i) + TAU(z+1,i) = exp(beta(z,i) + TAU(z,i+1))
And we want to grow i to a fixed value n. Again, I haven't programmed in forever, so I apologize if I'm speaking a little nonsensically.
EDIT2:
So, as @rahnema1 suggests, I should produce a reproducible example. In order to do this, I'll write the code for my beta function. It's surprisingly simple. This is for the case where the "multiplier" variable is set to log(2); but you don't need to worry about any of that.
function f = beta(z,n)
    f = 0;
    for i = 0:n-1
        f = exp(f)/(1+exp(log(2)*(n-i-z)));
    end
end
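In other words (writing out the LaTeX source, since I can't render it here), the loop computes the iteration

f_0 = 0, \qquad f_{i+1} = \frac{e^{f_i}}{1 + 2^{\,n-i-z}} \quad (i = 0,\dots,n-1), \qquad \beta(z,n) = f_n,

using the fact that exp(log(2)*(n-i-z)) is just 2^(n-i-z).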
This will work fine for z a float no greater than 4. Once you make z larger it'll start to overflow. So for example, if you put in:

>> beta(2,100)
ans = 1.4242
>> beta(3,100)
ans = 3.3235
>> beta(3,100) - exp(beta(2,100))/(1/4+1)
ans = 0
The significance of the 100 is simply how many iterations we perform; it converges fast, so even setting this to 15 or so will still produce the same numerical accuracy. Now, the expected output I want for TAU is pretty straightforward:
TAU(z,1) = log(beta(z+1,1)) - beta(z,1)
TAU(z,2) = log(beta(z+1,2) + TAU(z+1,1)) - beta(z,2)
TAU(z,3) = log(beta(z+1,3) + TAU(z+1,2)) - beta(z,3)
...
TAU(z,n) = log(beta(z+1,n) + TAU(z+1,n-1)) - beta(z,n)
I hope this helps. I feel like there should be an easy way to program this sequence, and I must be missing something obvious; but maybe it's just not possible in Matlab.
At mimocha's suggestion, I'll look into tail-end recursion. I hope to god I don't have to go back to Pari-gp; but it looks like I may have to. Not looking forward to doing a deep dive on that language, lol.
Thanks, again!
Is this what you are looking for?
function out = tau(z,n)
    % Ends recursion when n == 1
    if n == 1
        out = log(beta(z+1,1)) - beta(z,1);
        return
    end
    out = log(beta(z+1,n) + tau(z+1,n-1)) - beta(z,n);
end

function f = beta(z,n)
    f = 0;
    for i = 0:n-1
        f = exp(f) / (1 + exp(log(2)*(n-i-z)));
    end
end
This is basically your code from the most recent edit, but I've added a simple catch in the tau function. I tried running your code and noticed that n gets decremented infinitely (no exit condition).
With the modification, the code runs successfully on my laptop for smaller integer values of n, where 1e5 > n >= 1, and for floating-point values of z, real and complex. The code will unfortunately break for non-integer values of n, since I don't know what values to return for, say, tau(1,0) or tau(1,0.9). This should be easily fixable if you know the math, though.
However, many of the values I get are NaNs or Infs. So I'm not sure whether your original problem was an Out of memory error (infinite recursion) or values blowing up to infinity / NaN (a numerical stability issue).
Here is a quick 100x100 grid calculation I made with this code.
Then I tested negative values of z, and found the imaginary part of the output looks kinda cool.
Not to mention I'm slightly geeking out over the fact that pi is showing up in the imaginary part as well :)
>> tau(-0.3,2)
ans = -1.45179335740446147085 + 3.14159265358979311600i

Indefinite integration with Matlab's Symbolic Toolbox - complex solution

I'm using Matlab 2014b. I've tried:
clear all
syms x real
assumeAlso(x>=5)
This returned:
ans =
[ 5 <= x, in(x, 'real')]
Then I tried:
int(sqrt(x^2-25)/x,x)
But this still returned a complex answer:
(x^2 - 25)^(1/2) - log(((x^2 - 25)^(1/2) + 5*i)/x)*5*i
I tried the simplify command, but still got a complex answer. Now, this might be fixed in the latest version of Matlab. If so, can people let me know, or offer a suggestion for getting the real answer?
The hand-calculated answer is sqrt(x^2-25)-5*asec(x/5)+C.
This behavior is present in R2017b, though when converted to floating point the imaginary components are different.
Why does this occur?
This occurs because Matlab's int function returns the full general solution when you ask for the indefinite integral. This solution is valid over the entire domain of real values, including your restricted domain of x>=5.
With a bit of math you can show that the solution is always real for x>=5 (see complex logarithm). Or you can use more symbolic math via the isAlways function to show this:
syms x real
assume(x>=5)
y = int(sqrt(x^2-25)/x, x)
isAlways(imag(y)==0)
This returns true (logical 1). Unfortunately, Matlab's simplification routines appear unable to reduce this expression when assumptions are included. You might also submit this case to The MathWorks as a service request in case they'd consider improving the simplification for this and similar equations.
How can this be "fixed"?
If you want to get rid of the zero-valued imaginary part of the solution you can use sym/real:
real(y)
which returns 5*atan2(5, (x^2-25)^(1/2)) + (x^2-25)^(1/2).
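As a quick consistency check (my own algebra, valid for x >= 5): atan2(5, sqrt(x^2-25)) = arcsin(5/x), and arcsec(x/5) = pi/2 - arcsin(5/x), so

5\,\operatorname{atan2}\!\left(5,\sqrt{x^2-25}\right) + \sqrt{x^2-25} = \sqrt{x^2-25} + 5\arcsin\!\frac{5}{x} = \sqrt{x^2-25} - 5\,\operatorname{arcsec}\!\frac{x}{5} + \frac{5\pi}{2},

which matches the hand-calculated antiderivative (Matlab's asec) up to the constant of integration.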
Also, as @SardarUsama points out, when the full solution is converted to floating point (or variable precision) there will sometimes be numeric imprecision when converting from exact symbolic form. Using the symbolic real form above should avoid this.
The answer is not really complex.
Take a look at this:
clear all; %To clear the conditions of x as real and >=5 (simple clear doesn't clear that)
syms x;
y = int(sqrt(x^2-25)/x, x)
which, as we know, gives:
y =
(x^2 - 25)^(1/2) - log(((x^2 - 25)^(1/2) + 5i)/x)*5i
Now put some real values of x≥5 to check what result it gives:
n = 1004;           % We'll be putting 1000 values of x in y, from 5 to 1004
yk = zeros(1000,1); % Preallocation
for k = 5:n
    yk(k-4) = subs(y,x,k); % Putting the value of x
end
Now let's check the imaginary part of the result we have:
>> imag(yk)
ans =
1.0e-70 *
0
0
0
0
0.028298997121333
0.028298997121333
0.028298997121333
%and so on...
Notice the multiplier 1e-70.
Let's check the maximum value of imaginary part in yk.
>> max(imag(yk))
ans =
1.131959884853339e-71
This implies that the imaginary part is extremely small and not a considerable amount to be worried about. Ideally it would be zero; it appears only because of imprecise floating-point calculations. Hence, it is safe to call your result real.

Using SciPy's quad to get the principal value of an integral by integrating to just below and from just above the singular point

I am trying to compute the principal value of an integral (over s) of 1/((s - q02)*(s - q2)) on [Ecut, inf] with q02 < Ecut < q2. Doing the principal value by hand (or in Mathematica) one obtains the general result
ln((q2-Ecut)/(Ecut-q02)) / (q02 -q2)
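(This follows from partial fractions,

\frac{1}{(s-q_{02})(s-q_2)} = \frac{1}{q_{02}-q_2}\left(\frac{1}{s-q_{02}} - \frac{1}{s-q_2}\right),

since the combined logarithm vanishes at the upper limit and the principal-value prescription cancels the divergence at s = q2.)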
In the specific example below this gives the result -1.58637*10^-11. One should also be able to get the same result by splitting the integral in two, integrating up to q2 - eps, then starting from q2 + eps, and adding the two results (the divergences should cancel). By taking eps smaller and smaller one should recover the result above. When I implement this in scipy using quad, however, my result converges to the wrong value 6.04685e-11, as I show in the plot of eps vs integral result I include.
Why is quad doing this? Even if I have eps = 0 it gives me this wrong result, when I would expect it to give me an error since the integrand blows up...
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

q02 = 485124412.
Ecut = 17909665929.
q2 = 90000000000.

def integrand(s):
    return 1/((s - q02)*(s - q2))

xx = [1., 0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001, 0.00000001,
      0.000000001, 0.0000000001, 0.00000000001, 0.]
integral = [0*y for y in xx]
i = 0
for eps in xx:
    ans1, err = quad(integrand, Ecut, q2 - eps)
    ans2, err = quad(integrand, q2 + eps, np.inf)
    integral[i] = ans1 + ans2
    i = i + 1

plt.semilogx(xx, integral, marker='.')
plt.show()
One should also be able to get the same result by splitting the integral in two, integrating up to q2 - eps and then starting from q2 + eps, and then adding the two results
Only if computations were perfectly accurate. In numerical practice, what you described is basically the worst thing one could do. You get two large integrals of opposite signs that very nearly cancel each other when added; what is left has more to do with the errors of integration than with actual value of the integral.
I notice you disregarded the error values err in your script, not even printing them out. Bad idea: they are of size 1e-10, which would already tell you that the final result with "something e-11" is junk.
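For example (a sketch reusing integrand, Ecut and q2 from your script), printing the error estimates alongside the values makes the problem visible:

from scipy.integrate import quad
import numpy as np

eps = 1e-6
ans1, err1 = quad(integrand, Ecut, q2 - eps)   # left of the singularity
ans2, err2 = quad(integrand, q2 + eps, np.inf) # right of the singularity
print("left :", ans1, "+/-", err1)
print("right:", ans2, "+/-", err2)
print("sum  :", ans1 + ans2)
# The error estimates are of order 1e-10, larger in magnitude than the
# ~1e-11 value being sought, so the sum is dominated by integration error.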
The Computational Science question Numerical Principal Value Integration - Hilbert like addresses this issue. One of the approaches they indicate is to add the values of the integrand at the points symmetric about the singularity, before trying to integrate it. This requires taking the integral over a symmetric interval centered at the singularity q2 (that is, from Ecut to 2*q2-Ecut), and then adding the contribution of the integral from 2*q2-Ecut to infinity. This split makes sense anyway, because quad treats infinite limits very differently (using Fourier integration), which is yet another thing that will affect the way the singularity cancels out.
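In formula form, the folding uses the substitution s -> 2*q2 - s on the right half of the symmetric interval:

\mathrm{PV}\!\int_{E_{\mathrm{cut}}}^{2q_2-E_{\mathrm{cut}}} f(s)\,ds = \int_{E_{\mathrm{cut}}}^{q_2} \bigl[f(s) + f(2q_2 - s)\bigr]\,ds,

and for f(s) = 1/((s - q02)(s - q2)) the simple poles at s = q2 cancel inside the bracket, leaving an ordinary finite integrand.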
So, an implementation of this approach would be
ans1, err = quad(lambda s: integrand(s) + integrand(2*q2-s), Ecut, q2)
ans2, err = quad(integrand, 2*q2-Ecut, np.inf)
No eps is needed. However, the result is still off: it's about -2.5e-11. Turns out, the second integral is the culprit. Unfortunately, the Fourier integral approach doesn't seem to be effective here (or I didn't find a way to make it work). It turns out that providing a large, but finite value as the upper limit leads to a better result, especially if the option epsabs is also used, e.g. epsabs=1e-20.
Better yet, read the documentation of quad extra carefully and notice that it directly supports integrals with Cauchy weight 1/(s-q2), choosing an appropriate numerical method for them. This still requires a finite upper limit, and a small value of epsabs, but the result is pretty accurate:
quad(lambda s: 1/(s - q02), Ecut, 1e9*q2, weight='cauchy', wvar=q2, epsabs=1e-20)
returns -1.5863735715967363e-11, compared to exact value -1.5863735704856253e-11. Notice that the factor 1/(s-q2) does not appear in the integrand above, being relegated to the weight options.
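For reference, the convention behind the weight option (my reading of the quad documentation) is

\texttt{quad(f, a, b, weight='cauchy', wvar=c)} \approx \mathrm{PV}\!\int_a^b \frac{f(s)}{s-c}\,ds,

which is why only the regular factor 1/(s - q02) is passed in as the integrand.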

Issue with Matlab solve function?

The following commands
syms x real;
f = @(x) log(x^2)*exp(-1/(x^2));
fp(x) = diff(f(x),x);
fpp(x) = diff(fp(x),x);
and
solve(fpp(x)>0,x,'Real',true)
return the result
solve([0.0 < (8.0*exp(-1.0/x^2))/x^4 - (2.0*exp(-1.0/x^2))/x^2 - (6.0*log(x^2)*exp(-1.0/x^2))/x^4 + (4.0*log(x^2)*exp(-1.0/x^2))/x^6], [x == RD_NINF..RD_INF])
which is not what I expect.
The first question: Is it possible to force Matlab's solve to return the set of all solutions?
(This is related to this question.) Moreover, when I try to solve the equation
solve(fpp(x)==0,x,'Real',true)
which returns
ans =
-1.5056100417680902125994180096313
I am not satisfied, since not all solutions are returned (they are approximately -1.5056, 1.5056, -0.5663 and 0.5663, obtained from WolframAlpha).
I know that vpasolve with some initial guess can handle this. But I have no idea how, in general, to find initial guesses that yield all the solutions, which is my second question.
Other solutions or suggestions for solving these problems are welcomed.
As I indicated in my comment above, sym/solve is primarily meant to solve for analytic solutions of equations. When this fails, it tries to find a numeric solution. Some equations can have an infinite number of numeric solutions (e.g., periodic equations), and thus, as per the documentation: "The numeric solver does not try to find all numeric solutions for [the] equation. Instead, it returns only the first solution that it finds."
However, one can access the features of MuPAD from within Matlab. MuPAD's numeric::solve function has several additional capabilities. Of particular interest here is the 'AllRealRoots' option. In your case:
syms x real;
f = @(x)log(x^2)*exp(-1/(x^2));
fp(x) = diff(f(x),x);
fpp(x) = diff(fp(x),x);
s = feval(symengine,'numeric::solve',fpp(x)==0,x,'AllRealRoots')
which returns
s =
[ -1.5056102995536617698689500437312, -0.56633904710786569620564475006904, 0.56633904710786569620564475006904, 1.5056102995536617698689500437312]
as well as a warning message.
My answer to this question shows other ways that various MuPAD solvers can be used, particularly if you can isolate and bracket your roots.
The above is not going to directly help with your inequalities other than telling you where the function changes sign. For those you could try:
s = feval(symengine,'solve',fpp(x)>0,x,'Real')
which returns
s =
(Dom::Interval(0, Inf) union Dom::Interval(-Inf, 0)) intersect solve(0 < 2*log(x^2) - 3*x^2*log(x^2) + 4*x^2 - x^4, x, Real)
Try plotting this function along with fpp.
While this is not a bug per se, The MathWorks still might be interested in this difference in behavior and the poor performance of sym/solve (and the underlying symobj::solvefull) relative to MuPAD's solve. File a bug report if you like. For the life of me, I don't understand why they can't better unify these parts of Matlab. The separation makes no sense from the perspective of a user.

vectorizing loops in Matlab - performance issues

This question is related to these two:
Introduction to vectorizing in MATLAB - any good tutorials?
filter that uses elements from two arrays at the same time
Based on the tutorials I read, I was trying to vectorize a procedure that takes a really long time.
I've rewritten this:
function B = bfltGray(A,w,sigma_r)
dim = size(A);
B = zeros(dim);
for i = 1:dim(1)
    for j = 1:dim(2)
        % Extract local region.
        iMin = max(i-w,1);
        iMax = min(i+w,dim(1));
        jMin = max(j-w,1);
        jMax = min(j+w,dim(2));
        I = A(iMin:iMax,jMin:jMax);
        % Compute Gaussian intensity weights.
        F = exp(-0.5*(abs(I-A(i,j))/sigma_r).^2);
        B(i,j) = sum(F(:).*I(:))/sum(F(:));
    end
end
into this:
function B = rngVect(A, w, sigma)
W = 2*w+1;
I = padarray(A, [w,w], 'symmetric');
I = im2col(I, [W,W]);
H = exp(-0.5*(abs(I - repmat(A(:)', size(I,1), 1))/sigma).^2);
B = reshape(sum(H.*I,1)./sum(H,1), size(A, 1), []);
where:
A is a 512x512 matrix,
w is half of the window size, usually equal to 5,
sigma is a parameter in the range [0 1] (usually one of 0.1, 0.2 or 0.3).
So the I matrix would have 512x512x121 = 31719424 elements.
But this version seems to be as slow as the first one, and in addition it uses a lot of memory and sometimes causes memory problems.
I suppose I've done something wrong, probably some logic mistake in the vectorization. Well, in fact I'm not surprised: this method creates really big matrices and the computations are probably proportionally longer.
I have also tried to write it using nlfilter (similar to the second solution given by Jonas) but it seems to be hard since I use Matlab 6.5 (R13) (there are no sophisticated function handles available).
So once again, I'm asking not for ready solution, but for some ideas that would help me to solve this in reasonable time. Maybe you will point me what I did wrong.
Edit:
As Mikhail suggested, the results of profiling are as follows:
65% of the time was spent in the line H = exp(...)
25% of the time was used by im2col
How big are I and H (i.e. numel(I)*8 bytes)? If you start paging, then the performance of your second solution is going to be affected very badly.
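Doing the arithmetic for the sizes quoted in the question (assuming 8-byte doubles):

\texttt{numel(I)} \times 8 = 121 \cdot 512^2 \cdot 8\ \text{bytes} \approx 254\ \text{MB},

and H, plus the temporary created by repmat, are each the same size again, so the line that builds H touches on the order of 750 MB at once. On hardware contemporary with Matlab 6.5 that almost certainly means paging.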
To test whether you really have a problem due to too large arrays, you can try and measure the speed of the calculation using tic and toc for arrays A of increasing size. If the execution time increases faster than by the square of the size of A, or if the execution time jumps at some size of A, you can try and split the padded I into a number of sub-arrays and perform the calculations like that.
Otherwise, I don't see any obvious places where you could be losing lots of time. Well, maybe you could skip the reshape, by replacing B with A in your function (saves a little memory as well), and writing
A(:) = sum(H.*I,1)./sum(H,1);
You may also want to look into upgrading to a more recent version of Matlab - they've worked hard on improving performance.