I am trying to minimize a function using scipy.optimize. Here is my program; the last line is the error message.
import sympy as s
from scipy.optimize import minimize
x,y,z=s.symbols('x y z')
f= lambda z: x**2-y**2
bnds = ((70,None),(4,6))
res = minimize(lambda z: fun(*x),(70,4), bounds=bnds)
<lambda>() argument after * must be an iterable, not Symbol
How to convert symbol to an iterable or define an iterable directly ?
In Python, calling a function with f(*x) means f(x[0], x[1], ...). That is, it expects x to be a tuple (or other iterable), and the function should have a definition like
def f(*args):
    # use the args tuple here
    ...
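For example, a minimal illustration of the star-unpacking (add_all is just an illustrative name):
def add_all(*args):
    # args arrives as a tuple of all the positional arguments
    return sum(args)

xs = (1, 2, 3)
add_all(*xs)   # same as add_all(1, 2, 3) -> 6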
I'm not quite sure what you are trying to do with the sympy code, or why you are using it instead of defining a function in Python/numpy directly.
A function like:
def f(z):
    x, y = z  # unpack z into the 2 variables
    return x**2 - y**2
should work in a minimize call with:
minimize(f, (10,3))
which will vary x and y, starting from (10, 3), seeking to minimize the value of f.
In [20]: minimize(f, (70,4), bounds=((70,None),(4,6)))
Out[20]:
      fun: 4864.0
 hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64>
      jac: array([ 139.99988369, -11.99996404])
  message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
     nfev: 9
      nit: 1
   status: 0
  success: True
        x: array([ 70., 6.])
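If you do want to start from the sympy expression, one option (just a sketch, using sympy.lambdify to produce a numeric function and unpacking the parameter vector yourself) is:
import sympy as s
from scipy.optimize import minimize

x, y = s.symbols('x y')
expr = x**2 - y**2
f_num = s.lambdify((x, y), expr, 'numpy')   # numeric function of (x, y)

res = minimize(lambda z: f_num(*z), (70, 4), bounds=((70, None), (4, 6)))
print(res.x)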
I am new to Numba. I am trying to accelerate a pretty complicated solver. However, I keep getting an error such as
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Untyped global name 'isinstance': Cannot determine Numba type of <class 'builtin_function_or_method'>
I wrote a small example to reproduce the same error:
import numba
import numpy as np
from numba import types
from numpy import zeros_like, isfinite
from numpy.linalg import solve
from numpy.random import uniform
@numba.njit(parallel=True)
def foo(A_, b_, M1=None, M2=None):
    x_ = zeros_like(b_)
    r = b_ - A_.dot(x_)
    flag = 1
    if isinstance(M1, types.NoneType):  # Error here
        y = r
    else:
        y = solve(M1, r)
    if not isfinite(y).any():
        flag = 2
    if isinstance(M2, types.NoneType):
        z = y
    else:
        z = solve(M2, y)
    if not isfinite(z).any():
        flag = 2
    return z, flag

N = 10
tmp = np.random.rand(N, N)
A = np.dot(tmp, tmp.T)
x = np.zeros((N, 1), dtype=np.float64)
b = np.vstack([uniform(0.0, 1.0) for i in range(N)])
X_1, info = foo(A, b)
Also, if I change the decorator to generated_jit(), I get the following error:
r = b_ - A_.dot(x_)
AttributeError: 'Array' object has no attribute 'dot'
Numba compiles the function and requires every variable to be statically typed. This means each variable has exactly one type: a variable cannot be NoneType in one case and something else in another, unlike in CPython, which is dynamically typed (dynamic typing is also a major source of CPython's slowdown). Thus, using isinstance in nopython-JITed Numba functions does not make much sense; in fact, this built-in function is not supported.
That being said, Numba supports optional arguments by specifying optional(ArgumentType) in the signature (note that the resulting type of the variable is optional(ArgumentType), not ArgumentType or NoneType). You can then test whether the argument is set using if yourArgument is None:. I do not know what the types of M1 and M2 are in your code, but they need to be declared explicitly in the signature as optional arguments.
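Here is a minimal sketch of that pattern (not your solver; apply_precond is a made-up name, and the signature assumes r is a 1-D float64 array and M1 an optional 2-D float64 array):
import numpy as np
import numba

# optional(...) in the signature allows passing None; the check inside the
# jitted function is `is None`, not isinstance()
@numba.njit(numba.float64[:](numba.float64[:], numba.optional(numba.float64[:, :])))
def apply_precond(r, M1):
    if M1 is None:
        return r.copy()              # no preconditioner: use r unchanged
    return np.linalg.solve(M1, r)

r = np.ones(3)
print(apply_precond(r, None))             # -> [1. 1. 1.]
print(apply_precond(r, 2.0 * np.eye(3)))  # solves M1 @ y = r -> [0.5 0.5 0.5]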
I'd like to write a more or less universal fit function for the general function
$f(t) = \sum_i a_i \exp(-t/\tau_i)$
for some data I have.
Below is an example code for a biexponential function but I would like to be able to fit a monoexponential or a triexponential function with the smallest code adaptions possible.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
t = np.linspace(0, 10, 100)
a_1 = 1
a_2 = 1
tau_1 = 5
tau_2 = 1
data = 1*np.exp(-t/5) + 1*np.exp(-t/1)
data += 0.2 * np.random.normal(size=t.size)
def func(t, a_1, tau_1, a_2, tau_2):  # plus more exponential functions
    return a_1*np.exp(-t/tau_1) + a_2*np.exp(-t/tau_2)
popt, pcov = curve_fit(func, t, data)
print(popt)
plt.plot(t, data, label="data")
plt.plot(t, func(t, *popt), label="fit")
plt.legend()
plt.show()
In principle I thought of redefining the function to a general form
def func(t, a, tau):  # with a and tau as lists
    tmp = 0
    for i in range(len(a)):
        tmp += a[i]*np.exp(-t/tau[i])
    return tmp
and passing the arguments to curve_fit in the form of lists or tuples. However I get a TypeError as shown below.
TypeError: func() takes 4 positional arguments but 7 were given
Is there any way to rewrite the code so that the degree of the multi-exponential function is determined solely by the input parameters of curve_fit? So that passing
a = (1)
results in a monoexponential function whereas passing
a = (1, 2, 3)
results in a triexponential function?
Yes, that can be done easily with NumPy broadcasting:
def func(t, a, taus):  # plus more exponential functions
    a = np.array(a)[:, None]
    taus = np.array(taus)[:, None]
    return (a*np.exp(-t/taus)).sum(axis=0)
func accepts two lists, converts them into 2-D arrays, computes a matrix with all the exponentials, and then sums over the terms. Example:
t=np.arange(100).astype(float)
out=func(t,[1,2],[0.3,4])
plt.plot(out)
Keep in mind a and taus must be the same length, so sanitize your inputs as you see fit. Or you could also directly pass np.arrays instead of lists.
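To actually use this with curve_fit (which passes each fit parameter as a separate scalar), a possible wrapper is sketched below; make_fit_func is a made-up name, n is the number of exponential terms, and t/data are the arrays from the question:
def make_fit_func(n):
    def fit_func(t, *params):       # curve_fit supplies 2*n scalars
        a, taus = params[:n], params[n:]
        return func(t, a, taus)
    return fit_func

n = 2
p0 = np.ones(2*n)                   # p0 tells curve_fit how many parameters there are
popt, pcov = curve_fit(make_fit_func(n), t, data, p0=p0)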
I am trying to take the derivative of a function including a boolean variable with sympy.
My expected result:
Two different derivatives, depending on the boolean being either True or False (i.e. 1 or 0).
Example:
import sympy as sy
c, x = sy.symbols("c x", positive=True, real=True)
bo = sy.Function("bo")
fct1 = sy.Function("fct1")
fct2 = sy.Function("fct2")
FOC2 = sy.Function("FOC2")
y = 5
a = 2
b = 4
def fct1(x):
    return -0.004*x**2 + 0.25*x + 4

# the following gives the smaller positive intercept with the x-axis;
# this intercept is the threshold value for the boolean function, bo
min(sy.solve(fct1(x) - y, x))

def bo(x):
    if fct1(x) <= y:
        return 1
    else:
        return 0

def fct2(c, x):
    return a + b*c + bo(x)*c

def FOC2(c, x):
    return sy.diff(fct2(c, x), c)

print(FOC2(c, x))
The min call after the comments shows me that the threshold of x for bo being True or False is 4.29..., thus positive and real.
Output:
TypeError: cannot determine truth value of Relation
I understand that the truth value depends on x, which is a symbol. Thus, without knowing x one cannot determine bo.
But how would I get my expected result, where bo is symbolic?
First off, I would advise you to carefully consider what is going on in your code the way it is pasted above. You first define a few sympy functions, e.g.
fct1 = sy.Function("fct1")
So after this, fct1 is an undefined sympy.Function: undefined in the sense that neither its arguments nor its functional form are specified.
However, then you define same-named functions explicitly, as in
def fct1(x):
return -0.004*x**2 + 0.25*x + 4
Note, however, that at this point fct1 ceases to be a sympy.Function, or any sympy object for that matter: you overwrite the old definition, and it is now just a regular Python function!
This is also the reason that you get the error: when you call bo(x), python tries to evaluate
-0.004*x**2 + 0.25*x + 4 <= 5
and return a value according to your definition of bo(). But python does not know whether the above is true (or how to make that comparison), so it complains.
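To see why, here is a minimal standalone reproduction (my own illustration, not part of the fix):
import sympy as sy

x = sy.symbols("x", positive=True, real=True)
expr = -0.004*x**2 + 0.25*x + 4

rel = expr <= 5          # stays a symbolic Relational, not True/False
print(rel)
# bool(rel) would raise the TypeError from the question, because the truth
# value depends on x
print(rel.subs(x, 0))    # True, once x has a concrete value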
I would suggest 2 changes:
Instead of python functions, as in the code, you could simply use sympy expressions, e.g.
fct1 = -0.004*x**2 + 0.25*x + 4
To get the truth value of your condition, I would suggest using the Heaviside function (wiki), which evaluates to 0 for a negative argument and to 1 for a positive one. Its implementation in sympy is sympy.Heaviside.
Your code could then look as follows:
import sympy as sy
c, x = sy.symbols("c x", positive=True, real=True)
y = 5
a = 2
b = 4
fct1 = -0.004*x**2 + 0.25*x + 4
bo = sy.Heaviside(y - fct1)
fct2 = a + b*c + bo * c
FOC2 = sy.diff(fct2, c)
print(FOC2)
Two comments on the line
bo = sy.Heaviside(y - fct1)
(1) The current implementation does not evaluate sympy.Heaviside(0) by default; this is because there are differing definitions around (some define it to be 1, others 1/2). You'd want it to be 1, to be in accordance with the (weak) inequality in the OP. In sympy 1.1, this can be achieved by passing an additional argument to Heaviside, namely whatever you want Heaviside(0) to evaluate to:
bo = sy.Heaviside(y - fct1, 1)
This is not supported in older versions of sympy.
(2) You will get your FOC2, again involving a Heaviside term. What I like about this, is that you could keep working with this expression, say if you wanted to take a second derivative and so on. If, for the sake of readability, you would prefer a piecewise expression - no problem. Just replace the according line with
bo = sy.Heaviside(y - fct1)._eval_rewrite_as_Piecewise(y-fct1)
which will translate it into a piecewise function automatically. (Note that under older versions this implicitly uses Heaviside(0) = 0.5; it is best to use (1) and (2) together:
bo = sy.Heaviside(y - fct1, 1)._eval_rewrite_as_Piecewise(y-fct1)
Unfortunately, I don't have a working sympy 1.1 at hand right now and can only test the old code.
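As a side note of my own (assuming a reasonably recent sympy version), the public rewrite method performs the same conversion without touching the private helper:
bo = sy.Heaviside(y - fct1, 1).rewrite(sy.Piecewise)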
One more note concerning sympy's piecewise functions: they are much more readable when using sympy's LaTeX printing, which you can enable by inserting
sy.init_printing()
early in the code.
(Disclaimer: I am by no means an expert in sympy, and there might be other, preferable solutions out there. Just trying to make a suggestion!)
I have an optimization problem (1-D) coded in two ways: one using a for loop and another using numpy arrays. The for-loop version works fine but the numpy one fails.
Actually it is a bit more complicated: it can work with different starting points (!!) or if I choose another optimization algorithm like CG.
The 2 versions (functions and gradients) are giving the same results and the returned types are also the same as far as I can tell.
Here is my example, what am I missing?
import numpy as np
from scipy.optimize import minimize
# local params
v1 = np.array([1., 1.])
v2 = np.array([1., 2.])
# local functions
def f1(x):
    s = 0
    for i in range(len(v1)):
        s += (v1[i]*x - v2[i])**2
    return 0.5*s/len(v1)

def df1(x):
    g = 0
    for i in range(len(v1)):
        g += v1[i]*(v1[i]*x - v2[i])
    return g/len(v1)

def f2(x):
    return 0.5*np.sum((v1*x - v2)**2)/len(v1)

def df2(x):
    return np.sum(v1*(v1*x - v2))/len(v1)
x0 = 10. # x0 = 2 works
# tests...
assert np.abs(f1(x0)-f2(x0)) < 1.e-6 and np.abs(df1(x0)-df2(x0)) < 1.e-6 \
and np.abs((f1(x0+1.e-6)-f1(x0))/(1.e-6)-df1(x0)) < 1.e-4
# BFGS for f1: OK
o = minimize(f1, x0, method='BFGS', jac=df1)
if not o.success:
    print('FAILURE', o)
else:
    print('SUCCESS min = %f reached at %f' % (f1(o.x[0]), o.x[0]))
# BFGS for f2: failure
o = minimize(f2, x0, method='BFGS', jac=df2)
if not o.success:
    print('FAILURE', o)
else:
    print('SUCCESS min = %f reached at %f' % (f2(o.x[0]), o.x[0]))
The error I get is
A1 = I - sk[:, numpy.newaxis] * yk[numpy.newaxis, :] * rhok
IndexError: invalid index to scalar variable.
but that doesn't really help me, since the code does work with some other starting values.
I am using an all new fresh python install (python 3.5.2, scipy 0.18.1 and numpy 1.11.3).
The solver expects the return value of the jacobian df2 to have the same shape as its input x. Even though you passed in a scalar here, it is actually converted into a single-element ndarray. Since you used np.sum, your result became a scalar, and that causes strange things to happen.
Enclose the scalar result of df2 with np.array, and your code should work.
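A minimal sketch of that fix, keeping everything else from the question unchanged:
def df2(x):
    # return a 1-element array so the gradient has the same shape as x
    return np.array([np.sum(v1*(v1*x - v2))/len(v1)])

o = minimize(f2, x0, method='BFGS', jac=df2)
print(o.success, o.x)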
Suppose we have a function defined as:
function(f, df, x0)
where f is a function, df is its derivative, and x0 is an initial point. How do we define f on the command line? Do you use an inline definition? What about df and x0? What if df is a gradient? Also, if x0 is an ordered pair, how do you define it on the command line?
To pass a function as a variable, you need to use a function handle. A simple way to demonstrate this is to use a function handle to an anonymous function. A simple anonymous function can be defined as follows:
handle = @(arglist)anonymous_function
So, to make an anonymous function that adds 2 numbers, you could do something like the following:
f = @(a,b)a+b;
You can use this like any other function
>> f(1,2)
ans =
3
If df is just a simple numeric value, it can be defined as follows:
df = 0.4
To define a pair of values, you could do it like so:
X0=[1 2]
Finally, you can put it all together with this example function (put this in a file called myfunc.m):
function out = myfunc(f, df, x0)
    out = df * f(x0(1), x0(end));
Is this what you want? I was slightly confused by "x0 is an ordered pair".