scipy.optimize.fsolve 'proper array of floats' error - scipy

I need to compute the root of a function and I'm using scipy.optimize.fsolve. However, when I call fsolve, it sometimes raises an error that says 'Result from function call is not a proper array of floats.'
Here's an example of the inputs I'm using:
In [45]: guess = linspace(0.1,1.0,11)
In [46]: alpha_old = 0.5
In [47]: n_old = 0
In [48]: n_new = 1
In [49]: S0 = 0.9
In [50]: fsolve(alpha_eq,guess,args=(n_old,alpha_old,n_new,S0))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
TypeError: array cannot be safely cast to required type
---------------------------------------------------------------------------
error Traceback (most recent call last)
/home/andres/Documents/UdeA/Proyecto/basis_analysis/<ipython-input-50-f1e9a42ba072> in <module>()
----> 1 fsolve(bb.alpha_eq,guess,args=(n_old,alpha_old,n_new,S0))
/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.pyc in fsolve(func, x0, args, fprime, full_output, col_deriv, xtol, maxfev, band, epsfcn, factor, diag)
123 maxfev = 200*(n + 1)
124 retval = _minpack._hybrd(func, x0, args, full_output, xtol,
--> 125 maxfev, ml, mu, epsfcn, factor, diag)
126 else:
127 _check_func('fsolve', 'fprime', Dfun, x0, args, n, (n,n))
error: Result from function call is not a proper array of floats.
In [51]: guess = linspace(0.1,1.0,2)
In [52]: fsolve(alpha_eq,guess,args=(n_old,alpha_old,n_new,S0))
Out[52]: array([ 0.54382423, 1.29716005])
In [53]: guess = linspace(0.1,1.0,3)
In [54]: fsolve(alpha_eq,guess,args=(n_old,alpha_old,n_new,S0))
Out[54]: array([ 0.54382423, 0.54382423, 1.29716005])
There you can see that for 'guess' as defined in In[45] it outputs an error, however for 'guess' as defined in In[51] and In[53] it works ok. As far as I know In[45], In[51] and In[53] all produce the same type of array, so what's the reason for the error I'm getting in In[50]?
Here are the functions I'm calling in case they're the reason of the problem:
def alpha_eq(alpha2,n1,alpha1,n2,S0):
    return overlap(n1,alpha1,n2,alpha2) - S0

def overlap(n1,alpha1,n2,alpha2):
    aux1 = sqrt((2.0*alpha1)**(2*n1+3)/factorial(2*n1+2))
    aux2 = sqrt((2.0*alpha2)**(2*n2+3)/factorial(2*n2+2))
    return aux1 * aux2 * factorial(n1+n2+2) / (alpha1+alpha2)**(n1+n2+3)
(the functions linspace, sqrt and factorial are imported from scipy)
This is a plot of the function for which I'm trying to find the roots.
[plot of the function, omitted]
It seems to me like this is a bug in fsolve; however, I want to make sure I'm not making a stupid mistake before reporting it.
If there's something wrong with my code please let me know. Thanks!

I have modified your overlap function for debugging as follows:
def overlap(n1,alpha1,n2,alpha2):
    print n1, alpha1, n2, alpha2
    aux1 = sqrt((2.0*alpha1)**(2*n1 + 3)/factorial(2*n1 + 2))
    aux2 = sqrt((2.0*alpha2)**(2*n2 + 3)/factorial(2*n2 + 2))
    ret = aux1 * aux2 * factorial(n1+n2+2) / (alpha1+alpha2)**(n1+n2+3)
    print ret, ret.dtype
    return ret
And when I try to reproduce your error, here's what happens:
>>> scipy.optimize.fsolve(alpha_eq,guess,args=(n_old,alpha_old,n_new,S0))
0 0.5 1 [ 0.1 0.19 0.28 0.37 0.46 0.55 0.64 0.73 0.82 0.91 1. ]
[ 0.11953652 0.34008953 0.54906314 0.71208678 0.82778065 0.90418052
0.95046505 0.97452352 0.98252708 0.97911263 0.96769965] float64
...
0 0.5 1 [ 0.45613162 0.41366639 0.44818267 0.49222515 0.52879856 0.54371741
0.50642005 0.28700652 -3.72580492 1.81152096 1.41975621]
[ 0.82368346+0.j 0.77371428+0.j 0.81503304+0.j
0.85916030+0.j 0.88922137+0.j 0.89992643+0.j
0.87149667+0.j 0.56353606+0.j 0.00000000+1.21228156j
0.75791881+0.j 0.86627491+0.j ] complex128
So in the process of solving your equation, the square root of a negative number is being calculated, which leads to the complex128 dtype and your error.
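(For reference, the error itself is easy to reproduce in isolation: fsolve insists that the callback return a real float array, so any callable that returns a complex array should trigger the same message. A minimal standalone sketch, with a made-up residual function:)

import numpy as np
from scipy.optimize import fsolve

def residual(x):
    # np.emath.sqrt switches to a complex result as soon as any entry is
    # negative, which is what happens inside overlap() during the iteration
    return np.emath.sqrt(x) - 0.5

# The guess contains a negative entry, so the callback returns complex128 and
# fsolve raises "Result from function call is not a proper array of floats."
fsolve(residual, np.array([-1.0, 1.0]))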
With your function, if you are only interested in the zeros, I think you can get rid of the sqrts if you raise S0 to the 4th power:
def alpha_eq(alpha2,n1,alpha1,n2,S0):
    return overlap(n1,alpha1,n2,alpha2) - S0**4

def overlap(n1,alpha1,n2,alpha2):
    aux1 = (2.0*alpha1)**(2*n1 + 3)/factorial(2*n1 + 2)
    aux2 = (2.0*alpha2)**(2*n2 + 3)/factorial(2*n2 + 2)
    ret = aux1 * aux2 * factorial(n1+n2+2) / (alpha1+alpha2)**(n1+n2+3)
    return ret
And now:
>>> scipy.optimize.fsolve(alpha_eq,guess,args=(n_old,alpha_old,n_new,S0))
array([ 0.92452239, 0.92452239, 0.92452239, 0.92452239, 0.92452239,
0.92452239, 0.92452239, 0.92452239, 0.92452239, 0.92452239,
0.92452239])
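One caveat: dropping the square roots only leaves the roots unchanged if every factor of overlap is squared, which is presumably why the solution above (≈0.9245) differs from the ≈0.5438 found earlier. If the roots of the original equation are what you need, an alternative workaround (a sketch, untested against your data, reusing alpha_eq, guess and the other inputs from your session) is to keep the sqrt form but substitute alpha2 = t**2, so the solver can never hand a negative argument to the square root:

import numpy as np
from scipy.optimize import fsolve

def alpha_eq_t(t, n1, alpha1, n2, S0):
    # solve in t, where alpha2 = t**2 can never go negative
    return alpha_eq(t**2, n1, alpha1, n2, S0)

t_root = fsolve(alpha_eq_t, np.sqrt(guess), args=(n_old, alpha_old, n_new, S0))
alpha2_root = t_root**2   # map back to the original variable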

SciPy optimize 'ValueError: setting an array element with a sequence.' ARIMA models

I know that the error 'ValueError: setting an array element with a sequence.' normally occurs because the function being optimized returns a vector rather than a scalar, yet my ARMA model below still hits this issue. Here is the code:
def loghood(parm,endog,exog,p,q):
    arparams,maparams,exogparams,bias = parm
    armapredicts=np.zeros(endog.shape[0])
    bias=0
    res=abs(endog - np.mean(endog))
    if p==0:
        for i in range(1,endog.shape[0]-p):
            armapredicts[i] = np.array([[res[i-f+q]] for f in range(0,q)]).dot(maparams.T) + exog.iloc[i,:].dot(exogparams.T) + bias
    if q==0:
        for i in range(1,endog.shape[0]-q):
            armapredicts[i] = np.array([ [endog[i-f+p]] for f in range(0,p)]).T.dot(arparams) + exog.iloc[i,:].T.dot(exogparams) + bias
    else:
        for i in range(1,endog.shape[0]-2):
            armapredicts[i] = np.array([ [endog[i-f+p]] for f in range(0,p)]).T.dot(arparams.reshape(-1,1)) + np.array([[res[i-f+q]] for f in range(0,q)]).T.dot(maparams.reshape(-1,1)) + exog.iloc[i,:].T.dot(exogparams.reshape(-1,1)) + bias
        print(np.array([ [endog[i-f+p]] for f in range(0,p)]).T.shape)
        print(maparams.reshape(-1,1).shape)
    liklihood=1/((2*np.pi*armapredicts)**(1/2))*np.exp(-res**2/(2*armapredicts**2))
    print(liklihood.shape)
    log_hood=np.sum(np.log(liklihood.values))
    print(log_hood)
    return log_hood
x0=[np.ones(2),np.ones(2),np.ones(returnsant.shape[1]-1),1]
x0=np.array(x0,dtype=object).flatten()
res = spop.minimize(loghood,x0 ,args=(returnsant['Hedge Fund'],returnsant.drop('Hedge Fund',axis= 1),2,2), method='Nelder-mead')
print(res)
I know that likelihood is a 1-D vector and log_hood is certainly a scalar after np.sum, so how is this error occurring? Thank you for your time.
EDIT: forgot to include the full error message
TypeError Traceback (most recent call last)
TypeError: only size-1 arrays can be converted to Python scalars
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
/var/folders/7f/gmpqnwqx0lb4nrsz2vqhvgn40000gp/T/ipykernel_30671/32993526.py in <module>
1 x0=[np.ones(2),np.ones(2),np.ones(returnsant.shape[1]-1),1]
2 #x0=np.array(x0,dtype=object).flatten()
----> 3 res = spop.minimize(loghood,x0 ,args=(returnsant['Hedge Fund'],returnsant.drop('Hedge Fund',axis= 1),2,2), method='Nelder-mead')
4 print(res)
~/opt/anaconda3/lib/python3.9/site-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
609
610 if meth == 'nelder-mead':
--> 611 return _minimize_neldermead(fun, x0, args, callback, bounds=bounds,
612 **options)
613 elif meth == 'powell':
~/opt/anaconda3/lib/python3.9/site-packages/scipy/optimize/optimize.py in _minimize_neldermead(func, x0, args, callback, maxiter, maxfev, disp, return_all, initial_simplex, xatol, fatol, adaptive, bounds, **unknown_options)
687 zdelt = 0.00025
688
--> 689 x0 = asfarray(x0).flatten()
690
691 if bounds is not None:
~/opt/anaconda3/lib/python3.9/site-packages/numpy/core/overrides.py in asfarray(*args, **kwargs)
~/opt/anaconda3/lib/python3.9/site-packages/numpy/lib/type_check.py in asfarray(a, dtype)
112 if not _nx.issubdtype(dtype, _nx.inexact):
113 dtype = _nx.float_
--> 114 return asarray(a, dtype=dtype)
115
116
ValueError: setting an array element with a sequence.
FINAL EDIT:
I resolved the issue by simply creating one big flat array x0 containing all my parameters:
x0=np.zeros(5+returnsant.shape[1]-1)
res = spop.minimize(loghood,x0,args=(returnsant['Hedge Fund'],returnsant.drop('Hedge Fund',axis= 1),0,2), method='Nelder-mead')
print(res)
Now I slice the sub-parameters out of it inside loghood:
def loghood(parms,endog,exog,p,q):
    arparams,maparams,exogparams,bias = parms[0:p],parms[p:p+q],parms[p+q:p+q+exog.shape[1]],parms[p+q+exog.shape[1]:-1]
    armapredicts=np.zeros(endog.shape[0])
    bias=0
    res=abs(endog - np.mean(endog))
    if p==0:
        for i in range(1,endog.shape[0]-q):
            armapredicts[i] = np.array([[res[i-f+q]] for f in range(0,q)]).T.dot(maparams) + exog.iloc[i,:].T.dot(exogparams) + bias
    if q==0:
        for i in range(1,endog.shape[0]-p):
            armapredicts[i] = np.array([ [endog[i-f+p]] for f in range(0,p)]).T.dot(arparams) + exog.iloc[i,:].T.dot(exogparams) + bias
    else:
        for i in range(1,endog.shape[0]-2):
            armapredicts[i] = np.array([ [endog[i-f+p]] for f in range(0,p)]).T.dot(arparams.reshape(-1,1)) + np.array([[res[i-f+q]] for f in range(0,q)]).T.dot(maparams.reshape(-1,1)) + exog.iloc[i,:].T.dot(exogparams.reshape(-1,1)) + bias
    liklihood=1/((2*np.pi*armapredicts)**(1/2))*np.exp(-res**2/(2*armapredicts**2))
    log_hood=np.sum(-np.log(liklihood.squeeze()))
    return log_hood

TypeError("can't convert expression to float")

The code I wrote might look foolish because it integrates the derivative of a function, but it is the basic foundation for other code I'm writing on acoustical analysis, which involves integrating products of different derivative functions. For that purpose I'm using SciPy for the integration and SymPy for the differentiation, but it raises TypeError("can't convert expression to float"). Below is the code I wrote; hoping for a solution to this.
import sympy
from sympy import *
from scipy.integrate import quad

var('r')

def diff(r):
    r=symbols('x')
    Z = 64.25 * r ** 5 - 175.71 * r ** 4 + 170.6 * r ** 3 - 71.103 * r ** 2 + 3 * r
    E=sympy.diff(Z,r)
    print(E)
    return E

R=quad(diff,0,1)[0]
print(R)
I have to say that I'm a bit confused by your statement "integration of a derivative function" since the fundamental theorem of calculus would suggest that this is just a waste of CPU cycles. I'll presume that you know what you're doing though and that you just want to be able to compute some definite integrals numerically...
The SymPy expression that you want to integrate is this:
In [33]: from sympy import *
In [34]: r = symbols("x") # Why are you calling this x?
In [35]: Z = 64.25 * r ** 5 - 175.71 * r ** 4 + 170.6 * r ** 3 - 71.103 * r ** 2 +
...: 3 * r
In [36]: E = diff(Z, r)
In [37]: E
Out[37]:
321.25⋅x⁴ - 702.84⋅x³ + 511.8⋅x² - 142.206⋅x + 3
There are two basic ways to do this with SymPy:
In [38]: integrate(E, (r, 0, 1)) # symbolic integration
Out[38]: -8.96299999999999
In [39]: Integral(E, (r, 0, 1)).evalf() # numeric integration
Out[39]: -8.96300000000002
Note that had you used exact rational numbers you would see a more accurate result in either case:
In [40]: nsimplify(E)
Out[40]:
1285⋅x⁴/4 - 17571⋅x³/25 + 2559⋅x²/5 - 71103⋅x/500 + 3
In [41]: integrate(nsimplify(E), (r, 0, 1))
Out[41]: -8963/1000
In [42]: Integral(nsimplify(E), (r, 0, 1)).evalf()
Out[42]: -8.96300000000000
While the approaches above are very accurate, and work nicely for this particular integral which is easy to compute both symbolically and numerically, they are both slower than using something like SciPy's quad function, which works with machine-precision floating point and efficient NumPy arrays. To use SciPy's quad function you need to lambdify your expression into an ordinary Python function:
In [44]: from scipy.integrate import quad
In [45]: f = lambdify(r, E, "numpy")
In [46]: f(0)
Out[46]: 3.0
In [47]: f(1)
Out[47]: -8.99600000000001
In [48]: quad(f, 0, 1)[0]
Out[48]: -8.963000000000001
What lambdify does is just to generate an efficient Python function for you. You can see the code that it uses like this:
In [51]: import inspect
In [52]: print(inspect.getsource(f))
def _lambdifygenerated(x):
return 321.25*x**4 - 702.84*x**3 + 511.8*x**2 - 142.206*x + 3
The quad routine will pass in numpy arrays for x and so this can be very efficient. If you have high-order polynomials then sympy's horner function can be used to optimise the expression:
In [53]: horner(E)
Out[53]: x⋅(x⋅(x⋅(321.25⋅x - 702.84) + 511.8) - 142.206) + 3.0
In [54]: f2 = lambdify(r, horner(E), "numpy")
In [56]: print(inspect.getsource(f2))
def _lambdifygenerated(x):
return x*(x*(x*(321.25*x - 702.84) + 511.8) - 142.206) + 3.0
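To make the point about array evaluation concrete, here is a small self-contained sketch (rebuilding the same expression rather than reusing the session above) that feeds the generated function a whole NumPy array and then hands it to quad; the printed integral should agree with the earlier result to within rounding:

import numpy as np
from scipy.integrate import quad
from sympy import symbols, diff, horner, lambdify

x = symbols("x")
Z = 64.25*x**5 - 175.71*x**4 + 170.6*x**3 - 71.103*x**2 + 3*x
f2 = lambdify(x, horner(diff(Z, x)), "numpy")

print(f2(np.linspace(0, 1, 5)))    # vectorised: one call evaluates the whole array
print(quad(f2, 0, 1)[0])           # expected to be close to -8.963, as above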
https://docs.sympy.org/latest/tutorial/calculus.html#integrals
https://docs.sympy.org/latest/modules/utilities/lambdify.html#sympy.utilities.lambdify.lambdify
https://docs.sympy.org/latest/modules/polys/reference.html#sympy.polys.polyfuncs.horner

Openmdao V1.7 Sellar MDF

I found something strange with the MDA of the Sellar problem on the doc page of OpenMDAO (http://openmdao.readthedocs.io/en/1.7.3/usr-guide/tutorials/sellar.html).
If I extract the code and only run the MDA (adding counters in the disciplines), I observe that the number of calls differs between the disciplines (discipline d1 is called twice as many times as d2), which is not expected. Does someone have an answer?
Here is the results
Coupling vars: 25.588303, 12.058488
Number of discipline 1 and 2 calls (10,5)
And here is the code
# For printing, use this import if you are running Python 2.x
from __future__ import print_function
import numpy as np
from openmdao.api import Component
from openmdao.api import ExecComp, IndepVarComp, Group, NLGaussSeidel, \
    ScipyGMRES
class SellarDis1(Component):
    """Component containing Discipline 1."""

    def __init__(self):
        super(SellarDis1, self).__init__()
        # Global Design Variable
        self.add_param('z', val=np.zeros(2))
        # Local Design Variable
        self.add_param('x', val=0.)
        # Coupling parameter
        self.add_param('y2', val=1.0)
        # Coupling output
        self.add_output('y1', val=1.0)
        self.execution_count = 0

    def solve_nonlinear(self, params, unknowns, resids):
        """Evaluates the equation
        y1 = z1**2 + z2 + x1 - 0.2*y2"""
        z1 = params['z'][0]
        z2 = params['z'][1]
        x1 = params['x']
        y2 = params['y2']
        unknowns['y1'] = z1**2 + z2 + x1 - 0.2*y2
        self.execution_count += 1

    def linearize(self, params, unknowns, resids):
        """ Jacobian for Sellar discipline 1."""
        J = {}
        J['y1','y2'] = -0.2
        J['y1','z'] = np.array([[2*params['z'][0], 1.0]])
        J['y1','x'] = 1.0
        return J
class SellarDis2(Component):
    """Component containing Discipline 2."""

    def __init__(self):
        super(SellarDis2, self).__init__()
        # Global Design Variable
        self.add_param('z', val=np.zeros(2))
        # Coupling parameter
        self.add_param('y1', val=1.0)
        # Coupling output
        self.add_output('y2', val=1.0)
        self.execution_count = 0

    def solve_nonlinear(self, params, unknowns, resids):
        """Evaluates the equation
        y2 = y1**(.5) + z1 + z2"""
        z1 = params['z'][0]
        z2 = params['z'][1]
        y1 = params['y1']
        # Note: this may cause some issues. However, y1 is constrained to be
        # above 3.16, so lets just let it converge, and the optimizer will
        # throw it out
        y1 = abs(y1)
        unknowns['y2'] = y1**.5 + z1 + z2
        self.execution_count += 1

    def linearize(self, params, unknowns, resids):
        """ Jacobian for Sellar discipline 2."""
        J = {}
        J['y2', 'y1'] = .5*params['y1']**-.5
        # Extra set of brackets below ensure we have a 2D array instead of a 1D array
        # for the Jacobian; Note that Jacobian is 2D (num outputs x num inputs).
        J['y2', 'z'] = np.array([[1.0, 1.0]])
        return J
class SellarDerivatives(Group):
    """ Group containing the Sellar MDA. This version uses the disciplines
    with derivatives."""

    def __init__(self):
        super(SellarDerivatives, self).__init__()

        self.add('px', IndepVarComp('x', 1.0), promotes=['x'])
        self.add('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

        self.add('d1', SellarDis1(), promotes=['z', 'x', 'y1', 'y2'])
        self.add('d2', SellarDis2(), promotes=['z', 'y1', 'y2'])

        self.add('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                     z=np.array([0.0, 0.0]), x=0.0, y1=0.0, y2=0.0),
                 promotes=['obj', 'z', 'x', 'y1', 'y2'])

        self.add('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['y1', 'con1'])
        self.add('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

        self.nl_solver = NLGaussSeidel()
        self.nl_solver.options['atol'] = 1.0e-12
        self.ln_solver = ScipyGMRES()
from openmdao.api import Problem, ScipyOptimizer
top = Problem()
top.root = SellarDerivatives()
#top.driver = ScipyOptimizer()
#top.driver.options['optimizer'] = 'SLSQP'
#top.driver.options['tol'] = 1.0e-8
#
#top.driver.add_desvar('z', lower=np.array([-10.0, 0.0]),
# upper=np.array([10.0, 10.0]))
#top.driver.add_desvar('x', lower=0.0, upper=10.0)
#
#top.driver.add_objective('obj')
#top.driver.add_constraint('con1', upper=0.0)
#top.driver.add_constraint('con2', upper=0.0)
top.setup()
# Setting initial values for design variables
top['x'] = 1.0
top['z'] = np.array([5.0, 2.0])
top.run()
print("\n")
print("Coupling vars: %f, %f" % (top['y1'], top['y2']))
count1 = top.root.d1.execution_count
count2 = top.root.d2.execution_count
print("Number of discipline 1 and 2 calls (%i,%i)"% (count1,count2))
This is a good observation. Whenever you have a cycle, the "head" component runs a second time. The reason is as follows:
If you have a model with components that contain implicit states, a single execution looks like this:
Call solve_nonlinear to execute components
Call apply_nonlinear to calculate the residuals.
We don't have any components with implicit states in this model, but we indirectly created the need for one by having a cycle. Our execution looks like this:
Call solve_nonlinear to execute all components.
Call apply_nonlinear (which caches the unknowns, calls solve_nonlinear, and saves the difference in unknowns) on just the "head" component to generate a residual that we can converge.
Here, the head component is just the first component that gets executed, based on whatever order the framework decides to run the cycle in. You can verify that only a single head component gets extra runs by building a cycle with more than 2 components.
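A quick way to check that last claim is something like the untested sketch below, written against the same 1.7-style API as the code above: three pass-through components wired into a cycle, each counting its own solve_nonlinear calls. The component equations are made up; the only point is to see that exactly one of the three counters (the head of the cycle) picks up the extra evaluations.

from openmdao.api import Component, Group, Problem, NLGaussSeidel, ScipyGMRES

class Relay(Component):
    """Trivial explicit component: out = 0.5*in + 1 (a contraction, so GS converges)."""
    def __init__(self, in_name, out_name):
        super(Relay, self).__init__()
        self.add_param(in_name, val=1.0)
        self.add_output(out_name, val=1.0)
        self.in_name, self.out_name = in_name, out_name
        self.execution_count = 0

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns[self.out_name] = 0.5 * params[self.in_name] + 1.0
        self.execution_count += 1

class Cycle(Group):
    def __init__(self):
        super(Cycle, self).__init__()
        # c1 -> c2 -> c3 -> c1, coupled purely through promoted variable names
        self.add('c1', Relay('y3', 'y1'), promotes=['y3', 'y1'])
        self.add('c2', Relay('y1', 'y2'), promotes=['y1', 'y2'])
        self.add('c3', Relay('y2', 'y3'), promotes=['y2', 'y3'])
        self.nl_solver = NLGaussSeidel()
        self.nl_solver.options['atol'] = 1.0e-12
        self.ln_solver = ScipyGMRES()

top = Problem()
top.root = Cycle()
top.setup()
top.run()
print([top.root.c1.execution_count,
       top.root.c2.execution_count,
       top.root.c3.execution_count])   # only one count should be inflated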

Cryptic TypeError: 'decimal.Decimal' object cannot be interpreted as an integer

I am struggling to understand why this function apparently fails in the Jupyter Notebook, but not in the IPython shell:
def present_value( r, n, fv = None, pmt = None ):
    '''
    Function to compute the Present Value based on interest rate and
    a given future value.

    Arguments accepted
    ------------------
    * r = interest rate,
      which should be given in its original percentage, eg.
      5% instead of 0.05
    * n = number of periods for which the cash flow,
      either as annuity or single flow from one present value
    * fv = future value in dollars,
      if problem is annuity based, leave this empty
    * pmt = each annuity payment in dollars,
      if problem is single cash flow based, leave this empty
    '''
    original_args = [r, n, fv, pmt]
    dec_args = [Decimal( arg ) if arg != None
                else arg
                for arg in original_args
                ]
    if dec_args[3] == None:
        return dec_args[2] / ( ( 1 + ( dec_args[0] / 100 ) )**dec_args[1] )
    elif dec_args[2] == None:
        # annuity_length = range( 1, dec_args[1] + 1 )
        # Not allowed to add a Decimal object
        # with an integer and to use it
        # in the range() function,
        # so we dereference the integer from original_args
        annuity_length = range( 1, original_args[1] + 1 )
        # Apply discounting to each annuity payment made
        # according to number of years left till end
        all_compounded_pmt = [ dec_args[3] * ( 1 / ( ( 1 + dec_args[0] / 100 ) ** time_left ) )
                               for time_left in annuity_length
                             ]
        return sum( all_compounded_pmt )
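(The commented-out range() line above is exactly the sticking point: range() will not accept a Decimal, even one holding an integral value, which is why the code falls back to original_args. A short, standalone illustration:)

from decimal import Decimal

try:
    range(1, Decimal(35) + 1)                 # Decimal end point
except TypeError as exc:
    print(exc)                                # 'decimal.Decimal' object cannot be interpreted as an integer

print(len(range(1, int(Decimal(35)) + 1)))    # 35 -- fine once converted back to an int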
When I imported the module that this function resides in, named functions.py, using from functions import *, and then executed present_value(r=7, n=35, pmt = 11000), I got the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-93-c1cc587f7e27> in <module>()
----> 1 present_value(r=7, n=35, pmt = 11000)
/path_to_file/functions.py in present_value(r, n, fv, pmt)
73 if dec_args[3] == None:
74 return dec_args[2]/((1 + (dec_args[0]/100))**dec_args[1])
---> 75
76 elif dec_args[2] == None:
77 # annuity_length = range(1, dec_args[1]+1)
TypeError: 'decimal.Decimal' object cannot be interpreted as an integer
but in the IPython shell, evaluating this function works perfectly fine:
In [42]: functions.present_value(r=7, n=35, pmt = 11000)
Out[42]: Decimal('142424.39530474029537')
Can anyone please help me with this really confusing and obscure issue?

Input argument "b" is undefined

I am new to MATLAB and have searched everywhere. I am writing a function and cannot understand why this error is coming: "Input argument "b" is undefined." Shall I initialise b = 0, even though it is the parameter coming from the input console? My code:
function f = evenorodd( b )
%UNTITLED2 Summary of this function goes here
%zohaib
% Detailed explanation goes here
%f = b;%2;
f = [0 0];
f = rem(b,2);
if f == 0
    disp(b+ 'is even')
else
    disp(b+ 'is odd')
end
console:
??? Input argument "b" is undefined.
Error in ==> evenorodd at 6
f = rem(b,2);
From what I see, this is what you are trying to do:
function f = evenorodd( b )
f = rem(b,2);
if f == 0
    fprintf('%i is even\n', b)
else
    fprintf('%i is odd\n', b)
end
=======================
>> evenorodd(2);
2 is even
No need to initialize f as [0,0].
In MATLAB, you can't concatenate a number and a string with the + operator. Use fprintf.
The above function evenorodd takes one argument (integer) and returns 0 or 1.