Too many indices for array error - scipy.integrate.solve_ivp(method='Radau') gives error - scipy

I am supposed to solve an ODE as follows:
def dHdt(t, H):
    if H > H_p(t) and data2_sleep(t) > 0:
        return -0.323*24
    elif H > H_p(t) and data2_sleep(t) == 0:
        return 0.116*24
    elif H <= H_m(t) and data2_sleep(t) > 0:
        return -0.278*24
    elif H <= H_m(t) and data2_sleep(t) == 0:
        return 0.150*24
    elif H <= H_p(t) and H > H_m(t) and data2_sleep(t) > 0:
        return -0.274*24
    elif H <= H_p(t) and H > H_m(t) and data2_sleep(t) == 0:
        return 0.096*24
where H_p, H_m and data2_sleep are interpolation functions produced by scipy.interpolate.interp1d.
For the solver I used solve_ivp, but methods such as 'RK45' or 'LSODA' were not giving good results (I have an approximate solution at hand, and the results differ from it considerably; odeint does a much better job, although I think it is also unstable). I therefore wanted to use the stiff solvers Radau and BDF, but when running the line below
H_new = solve_ivp(dHdt, t_span = [0, 50], y0 = [H0], method = 'Radau', t_eval = t_span)
I get the following error
IndexError Traceback (most recent call last)
<ipython-input-107-6595d5c7194d> in <module>
----> 1 H_new = solve_ivp(dHdt, t_span = [0, 50], y0 = [H0], method = 'Radau', t_eval = t_span)
5 frames
/usr/local/lib/python3.8/dist-packages/scipy/integrate/_ivp/common.py in _dense_num_jac(fun, t, y, f, h, factor, y_scale)
325 h_vecs = np.diag(h)
326 f_new = fun(t, y[:, None] + h_vecs)
--> 327 diff = f_new - f[:, None]
328 max_ind = np.argmax(np.abs(diff), axis=0)
329 r = np.arange(n)
IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed
Indeed H0 is a single number, but there is no indexing in dHdt; the indexing error is raised inside the solver itself. (Also, only the stiff solvers produce this error; if I change the method to RK45 or similar, it works.)
What do you think is the reason for this error?

It turns out that for Radau and the other implicit solvers the right-hand-side function must return an array-like, i.e. the value has to be wrapped in brackets, []. These methods estimate the Jacobian numerically and index the returned value, which fails when dHdt returns a bare scalar; the other solvers happen to tolerate a scalar return.
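A minimal sketch of the corrected right-hand side, assuming the same interpolants H_p, H_m and data2_sleep as above; each branch now returns a one-element list, so the solver's numerical-Jacobian code always gets something it can index:
def dHdt(t, H):
    h = H[0]  # solve_ivp passes the state as a length-1 array
    if h > H_p(t) and data2_sleep(t) > 0:
        return [-0.323*24]
    elif h > H_p(t) and data2_sleep(t) == 0:
        return [0.116*24]
    elif h <= H_m(t) and data2_sleep(t) > 0:
        return [-0.278*24]
    elif h <= H_m(t) and data2_sleep(t) == 0:
        return [0.150*24]
    elif h <= H_p(t) and h > H_m(t) and data2_sleep(t) > 0:
        return [-0.274*24]
    elif h <= H_p(t) and h > H_m(t) and data2_sleep(t) == 0:
        return [0.096*24]
Returning np.array([...]) instead of a list works equally well; the point is that the return value must be 1-D, matching the shape of y0.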

Using scipy solve_bvp for a nonhomogeneous ODE

I am trying to solve the following 4th order BVP
y'''' = K - C*y
My x variable is a linspace with 100 nodes. As you can see, K is a vector of the same length (100) and makes the equation nonhomogeneous. When I run the solver, however, I get the following error:
Cell In [11], line 18, in fun(x, y)
17 def fun(x, y):
---> 18 ans = vector-np.multiply(C,y[0])
19 return np.vstack((y[1],y[2],y[3],ans))
ValueError: operands could not be broadcast together with shapes (100,) (99,)
Why does the solver suddenly change the length of y by 1 and how can I fix this error?
EDIT: I must add that the solver works fine when K is absent i.e. the equation is homogeneous.
from scipy.integrate import solve_bvp
import numpy as np

L = 10
nodes = 100
A = 1000
B = 1500
C = 0.05
x = np.linspace(0, L, nodes)
vector = np.ones(nodes)

def fun(x, y):
    ans = vector - np.multiply(C, y[0])
    return np.vstack((y[1], y[2], y[3], ans))

def bc(ya, yb):
    return np.array([ya[2], yb[2], ya[3]+A/B, yb[3]])

y_a = np.zeros((4, x.size))
res_a = solve_bvp(fun, bc, x, y_a)
res1 = res_a.sol(x)[0]
res2 = res_a.sol(x)[1]
res3 = B*res_a.sol(x)[2]
res4 = B*res_a.sol(x)[3]
In the first round, the solver sets up a system of polynomial (collocation) approximations over the nodes-1 = 99 segments of the initial subdivision.
There is no guarantee that this subdivision remains unchanged in later solver rounds, so your ODE right-hand-side function has to work with arbitrary x arrays. This means that parameters given as a function table need to be interpolated onto whatever x array the solver passes in. numpy.interp does this directly, and scipy.interpolate.interp1d generates reusable interpolation functions.
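A minimal sketch of that fix for the code above (x_nodes and K_table are illustrative names for the original grid and the tabulated term; here the table is just the constant vector from the question):
x_nodes = np.linspace(0, L, nodes)   # grid on which the nonhomogeneous term is tabulated
K_table = np.ones(nodes)
def fun(x, y):
    K_x = np.interp(x, x_nodes, K_table)   # re-sample K onto whatever mesh solve_bvp currently uses
    ans = K_x - C*y[0]
    return np.vstack((y[1], y[2], y[3], ans))
For a constant K this reduces to writing ans = 1.0 - C*y[0], which broadcasts to any mesh length; the interpolation is what matters once K is a genuine table of values.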

mle memory error with custom negative log-likelihood function

I am trying to use 'mle' with a custom negative log-likelihood function, but I get the following error:
Requested 1200000x1200000 (10728.8GB) array exceeds maximum array size preference (15.6GB). This might cause MATLAB to become unresponsive.
The data I am using is a 1x1200000 binary array (which I had to convert to double), and the function has 10 arguments: one for the data, 3 known parameters, and 6 to be optimized. I tried setting 'OptimFun' to both 'fminsearch' and 'fmincon'. Also, optimizing the parameters using 'fminsearch' and 'fminunc' instead of 'mle' works fine.
The problem happens in the 'checkFunErrs' function, inside the 'mlecustom.m' file (call at line 173, actual error at line 705).
With 'fminunc' I could calculate the optimal parameters, but it does not give me confidence intervals. Is there a way to circumvent this? Or am I doing something wrong?
Thanks for the help.
T_1 = 50000;
T_2 = 100000;
npast = 10000;
start = [0 0 0 0 0 0];
func = @(x, data, cens, freq) loglike(data, [x(1) x(2) x(3) x(4) x(5) x(6)], ...
    T_1, T_2, npast);
params = mle(data, 'nloglf', func, 'Start', start, 'OptimFun', 'fmincon');

% Computes the negative log-likelihood
function out = loglike(data, params, T_1, T_2, npast)
    size = length(data);
    if npast == 0
        past = 0;
    else
        past = zeros(1, size);
        past(npast+1:end) = movmean(data(npast:end-1), [npast-1, 0]); % Average number of events in the previous n years
    end
    lambda = params(1) + ...
        (params(2)*cos(2*pi*(1:size)/T_1)) + ...
        (params(3)*sin(2*pi*(1:size)/T_1)) + ...
        (params(4)*cos(2*pi*(1:size)/T_2)) + ...
        (params(5)*sin(2*pi*(1:size)/T_2)) + ...
        params(6)*past;
    out = sum(log(1+exp(lambda)) - data.*lambda);
end
Your issue is line 228 (as of MATLAB R2017b) of the in-built mle function, which happens just before the custom function is called:
data = data(:);
The input variable data is converted to a column array without warning. This is typically done to ensure that all further calculations are robust to the orientation of the input vector.
However, this is causing you issues, because your custom function assumes data is a row vector, specifically this line:
out = sum(log(1+exp(lambda))-data.*lambda);
Due to implicit expansion, when the row vector lambda and the column vector data interact, you get a huge square matrix per your error message.
Adding these two lines to make it explicit that both are column vectors resolves the issue, avoids implicit expansion, and applies the calculation element-wise as you intended.
lambda = lambda(:);
data = data(:);
So your function becomes
function out = loglike(data, params, T_1, T_2, npast)
    N = length(data);
    if npast == 0
        past = 0;
    else
        past = zeros(1, N);
        past(npast+1:end) = movmean(data(npast:end-1), [npast-1, 0]); % Average number of events in the previous n years
    end
    lambda = params(1) + ...
        (params(2)*cos(2*pi*(1:N)/T_1)) + ...
        (params(3)*sin(2*pi*(1:N)/T_1)) + ...
        (params(4)*cos(2*pi*(1:N)/T_2)) + ...
        (params(5)*sin(2*pi*(1:N)/T_2)) + ...
        params(6)*past;
    lambda = lambda(:);
    data = data(:);
    out = sum(log(1+exp(lambda)) - data.*lambda);
end
An alternative would be to rewrite your function so that it works with column vectors throughout, but the (1:N) ranges and the movmean construction still create row vectors, so that would require more changes. The suggested approach is arguably "lazier", but it is also robust to both row and column inputs.
Note also I've changed your variable name from size to N, since size is an in-built function which you should avoid shadowing.

solving non-linear equations using scipy

I'm trying to solve the isentropic area–Mach relation (the expression appears inside expr in the code below). Given a list of A_e/A* values and gamma = 1.2, how should I solve this equation so that a corresponding list of M_e values is returned, one for each A_e/A* value?
I thought about using scipy.optimize.newton, but it seems like this is not the right approach
def expr(x):
    result = np.arange(1, 1.25, step=0.004) - ((1/x)*((2/(1.2+1))*(1+((1.2-1)/2)*x**2))**((1.2+1)/(2*1.2-1)))
    return result.any()
scipy.optimize.newton(expr, 1.1)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_4660/3619719442.py in <module>
----> 1 scipy.optimize.newton(expr,1.1,x1=1.2)
D:\Softwares\Anaconda\lib\site-packages\scipy\optimize\zeros.py in newton(func, x0, fprime, args, tol, maxiter, fprime2, x1, rtol, full_output, disp)
338 " Failed to converge after %d iterations, value is %s."
339 % (itr + 1, p1))
--> 340 raise RuntimeError(msg)
341 warnings.warn(msg, RuntimeWarning)
342 p = (p1 + p0) / 2.0
RuntimeError: Tolerance of 0.09999999999999987 reached. Failed to converge after 1 iterations, value is 1.2.
I used x to denote M_e, and I replaced A_e/A* with a list of values, np.arange(1, 1.25, step=0.004). I guess Newton's method can only return a single scalar, but I defined the function with a whole list of A_e/A* values. How should I fix this?
newton works on scalar functions, and you are turning yours into a vector function. Since you want the zero for each Ae value separately, include the Ae parameter in the function definition, then call newton once per value (you can use the args keyword):
def expr(x, a):
    result = a - ((1/x)*((2/(1.2+1))*(1+((1.2-1)/2)*x**2))**((1.2+1)/(2*1.2-1)))
    return result

[newton(expr, .01, args=(a,)) for a in np.arange(1, 1.25, step=0.004)]
>> [0.9999999999999992, 0.9944379739232752, 0.9889506795317498, 0.9835363421148852, ...
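As a side note not taken from the original answer (it assumes SciPy 1.2 or later): newton also accepts an array of starting points and then solves the equations element-wise, so the loop can be replaced by one vectorized call. A sketch under that version assumption:
import numpy as np
from scipy.optimize import newton
a_vals = np.arange(1, 1.25, step=0.004)
def expr(x, a):
    return a - (1/x)*((2/(1.2+1))*(1+((1.2-1)/2)*x**2))**((1.2+1)/(2*1.2-1))
# one starting point per equation; newton returns an array of roots
M_e = newton(expr, np.full_like(a_vals, 0.01), args=(a_vals,))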

BVP4c solve for unknown boundary

I am trying to use bvp4c to solve a system of 4 odes. The issue is that one of the boundaries is unknown.
Can bvp4c handle this? In my code L is the unknown I am solving for.
I get an error message printed below.
function mat4bvp
    L = 8;
    solinit = bvpinit(linspace(0,L,100), @mat4init);
    sol = bvp4c(@mat4ode, @mat4bc, solinit);
    sint = linspace(0,L);
    Sxint = deval(sol,sint);
end
% ------------------------------------------------------------
function dtdpdxdy = mat4ode(s,y,L)
    Lambda = 0.3536;
    dtdpdxdy = [ y(2)
                 -sin(y(1)) + Lambda*(L-s)*cos(y(1))
                 cos(y(1))
                 sin(y(1)) ];
end
% ------------------------------------------------------------
function res = mat4bc(ya,yb,L)
    res = [ ya(1)
            ya(2)
            ya(3)
            ya(4)
            yb(1) ];
end
% ------------------------------------------------------------
function yinit = mat4init(s)
    yinit = [ cos(s)
              0
              0
              0 ];
end
Unfortunately I get the following error message:
>> mat4bvp
Not enough input arguments.
Error in mat4bvp>mat4ode (line 13)
-sin(y(1)) + Lambda*(L-s)*cos(y(1))
Error in bvparguments (line 105)
testODE = ode(x1,y1,odeExtras{:});
Error in bvp4c (line 130)
bvparguments(solver_name,ode,bc,solinit,options,varargin);
Error in mat4bvp (line 4)
sol = bvp4c(@mat4ode,@mat4bc,solinit);
One trick to transform a variable end point into a fixed one is to change the time scale. If x'(t)=f(t,x(t)) is the differential equation, set t=L*s, s from 0 to 1, and compute the associated differential equation for y(s)=x(L*s)
y'(s)=L*x'(L*s)=L*f(L*s,y(s))
The next trick to employ is to make the global variable part of the differential equation by treating it as a constant-valued state component. So the new system is
[ y'(s), L'(s) ] = [ L(s)*f(L(s)*s,y(s)), 0 ]
and the value of L occurs as an additional free boundary value at the left or right end, increasing the number of variables (the dimension of the state vector) so that it matches the number of boundary conditions.
I do not have Matlab readily available; in Python, with the tools in scipy, this can be implemented as
import numpy as np
from scipy.integrate import solve_bvp, odeint
import matplotlib.pyplot as plt

# The original right-hand side with the interval length L as a parameter
def fun0(t, y, L):
    Lambda = 0.3536
    return np.array([ y[1], -np.sin(y[0]) + Lambda*(L-t)*np.cos(y[0]), np.cos(y[0]), np.sin(y[0]) ])

# Wrapper function applying both tricks to transform the variable interval length into a fixed interval.
def fun1(s, y):
    L = y[-1]
    dydt = np.zeros_like(y)
    dydt[:-1] = L*fun0(L*s, y[:-1], L)
    return dydt

# Implement evaluation of the boundary condition residuals:
def bc(ya, yb):
    return [ ya[0], ya[1], ya[2], ya[3], yb[0] ]

# Define the initial mesh with 3 nodes:
x = np.linspace(0, 1, 3)

# This problem has multiple solutions. Try two initial guesses.
L_a = 8
L_b = 9
y_a = odeint(lambda y, t: fun1(t, y), [0, 0, 0, 0, L_a], x)
y_b = odeint(lambda y, t: fun1(t, y), [0, 0, 0, 0, L_b], x)

# Now we are ready to run the solver.
res_a = solve_bvp(fun1, bc, x, y_a.T)
res_b = solve_bvp(fun1, bc, x, y_b.T)
L_a = res_a.sol(0)[-1]
L_b = res_b.sol(0)[-1]
print("L_a=%.8f, L_b=%.8f" % (L_a, L_b))

# Plot the two found solutions. The solutions are in spline form; use this to produce a smooth plot.
x_plot = np.linspace(0, 1, 100)
y_plot_a = res_a.sol(x_plot)[0]
y_plot_b = res_b.sol(x_plot)[0]
plt.plot(L_a*x_plot, y_plot_a, label='L=%.8f' % L_a)
plt.plot(L_b*x_plot, y_plot_b, label='L=%.8f' % L_b)
plt.legend()
plt.xlabel("t")
plt.ylabel("y")
plt.grid()
plt.show()
which produces a plot of the two solution curves, one for each initial guess.
Trying different initial values for L finds other solutions on quite different scales, among them
L=0.03195111
L=0.05256775
L=0.05846539
L=0.06888907
L=0.08231966
L=4.50411522
L=6.84868060
L=20.01725616
L=22.53189063

Understanding the Jacobian output of scipy.optimize.minimize

I'm working with scipy.optimize.minimize to find the minimum of the RSS for a custom nonlinear function. I'll provide a simple linear example to illustrate what I am doing:
import numpy as np
from scipy import optimize

def response(X, b0, b1, b2):
    return b2 * X[1]**2 + b1 * X[0] + b0

def obj_rss(model_params, y_true, X):
    return np.sum((y_true - response(X, *model_params))**2)

x = np.array([np.arange(0, 10), np.arange(10, 20)])
r = 15. * x[1]**2 - 32. * x[0] + 10.

init_guess = np.array([0., 50., 10.])
res = optimize.minimize(obj_rss, init_guess, args=(r, x))
print(res)
This yields the results:
fun: 3.0218799331864133e-08
hess_inv: array([[ 7.50606278e+00, 2.38939463e+00, -8.33333575e-02],
[ 2.38939463e+00, 8.02462363e-01, -2.74621294e-02],
[ -8.33333575e-02, -2.74621294e-02, 9.46969972e-04]])
jac: array([ -3.31359843e-07, -5.42022462e-08, 2.34304025e-08])
message: 'Optimization terminated successfully.'
nfev: 45
nit: 6
njev: 9
status: 0
success: True
x: array([ 10.00066577, -31.99978062, 14.99999243])
And we see that the fitted parameters 10, -32, and 15 match those used to generate the actual data. That's great. Now my question:
My understanding is that the Jacobian should be an m x n matrix, where m is the number of records in the X input and n is the number of parameters. Clearly I don't have that in the results object. The results object yields an array that is referred to as the Jacobian in the documentation, but it is only one-dimensional, with a number of elements equal to the number of parameters.
Further confusing the matter, when I use method='SLSQP', the Jacobian that is returned has one more element than that returned by other minimization algorithms.
. . .
My larger goal here is to be able to calculate either confidence intervals or standard errors, t-, and p-values for the fitted parameters, so if you think I'm way off track here, please let me know.
EDIT:
The following is intended to show how the SLSQP minimization algorithm yields different results in the Jacobian than the default minimization algorithm, which is one of BFGS, L-BFGS-B, or SLSQP, depending on whether the problem has constraints or bounds (as mentioned in the documentation). The SLSQP solver is intended for use with constraints.
import numpy as np
from scipy import optimize

def response(X, b0, b1, b2):
    return b2 * X[1]**2 + b1 * X[0] + b0

def obj_rss(model_params, y_true, X):
    return np.sum((y_true - response(X, *model_params))**2)

x = np.array([np.arange(0, 10), np.arange(10, 20)])
r = 15. * x[1]**2 - 32. * x[0] + 10.

init_guess = np.array([0., 50., 10.])
res = optimize.minimize(obj_rss, init_guess, method='SLSQP', args=(r, x))
print(res)

r_pred = response(x, *res.x)
Yields results:
fun: 7.5269461938291697e-10
jac: array([ 2.94677643e-05, 5.52844499e-04, 2.59870917e-02,
0.00000000e+00])
message: 'Optimization terminated successfully.'
nfev: 58
nit: 10
njev: 10
status: 0
success: True
x: array([ 10.00004495, -31.9999794 , 14.99999938])
One can see that there is an extra element in the Jacobian array that is returned from the SLSQP solver. I am confused where this comes from.