Minimize a sympy expression - scipy

I have a sympy expression that depends on a variable x, and I want to find the value x for which the expression is minimized. This is my code so far:
import numpy as np
from sympy import *
from scipy.optimize import minimize as scipy_min
x = Symbol('x')
p = Symbol('p')
f = exp(-(x-p)**2/2)/sqrt(2*pi)
func = lambdify([x,p], f)
def func_np(x):
    return func(x, 2.2)
res = scipy_min(func_np, x, method='Nelder-Mead', tol=1e-6)
However I am getting the error: can't convert expression to float. Can someone help me with this? Thank you!

The second argument of minimize is an initial guess: a number, not a variable. You are trying to pass a sympy.Symbol, which is definitely not a number. It is fine to minimize a lambdified function; however, be aware that lambdify is (relatively) slow, so it can be better to print(expression) and write a def manually.
import numpy as np
from sympy import *
from scipy.optimize import minimize as scipy_min
x = Symbol('x')
p = Symbol('p')
f = exp(-(x-p)**2/2)/sqrt(2*pi)
func = lambdify([x,p], f)
def func_np(x):
    return func(x, 2.2)
res = scipy_min(func_np, 1.0, method='Nelder-Mead', tol=1e-6)
print(res.x)
yields -37.3. However, that is not a true solution: this particular function only approaches 0 as x goes towards ±∞, so it has no finite minimizer and Nelder-Mead simply wanders off.
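As a side note, since the expression is a Gaussian density it has a maximum at x = p rather than a minimum. A minimal sketch (using the hand-written def suggested above instead of lambdify, with the p = 2.2 from the question) that finds the peak by minimizing the negative:

import numpy as np
from scipy.optimize import minimize as scipy_min

# hand-written equivalent of the lambdified expression, with p fixed to 2.2
def func_np(x):
    return np.exp(-(x - 2.2)**2 / 2) / np.sqrt(2 * np.pi)

# maximize by minimizing the negative; the optimum is at x = p = 2.2
res = scipy_min(lambda x: -func_np(x), 1.0, method='Nelder-Mead', tol=1e-6)
print(res.x)  # approximately [2.2]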

Related

Passing Argument to a Generator to build a tf.data.Dataset

I am trying to build a TensorFlow dataset from a generator. I have a list of tuples called some_list, where each tuple has an integer and some text.
When I do not pass some_list as an argument to the generator, the code works fine:
import tensorflow as tf
import random
import numpy as np
some_list = [(1,'One'), [2,'Two'], [3,'Three'], [4,'Four'],
             (5,'Five'), [6,'Six'], [7,'Seven'], [8,'Eight']]

def text_gen1():
    random.shuffle(some_list)
    size = len(some_list)
    i = 0
    while True:
        yield some_list[i][0], some_list[i][1]
        i += 1
        if i > size:
            i = 0
            random.shuffle(some_list)
#Not passing any argument
tf_dataset1 = tf.data.Dataset.from_generator(text_gen1, output_types=(tf.int32, tf.string),
                                              output_shapes=((), ()))
for count_batch in tf_dataset1.repeat().batch(3).take(2):
    print(count_batch)
(<tf.Tensor: shape=(3,), dtype=int32, numpy=array([7, 1, 2])>, <tf.Tensor: shape=(3,), dtype=string, numpy=array([b'Seven', b'One', b'Two'], dtype=object)>)
(<tf.Tensor: shape=(3,), dtype=int32, numpy=array([3, 5, 4])>, <tf.Tensor: shape=(3,), dtype=string, numpy=array([b'Three', b'Five', b'Four'], dtype=object)>)
However, when I try to pass some_list as an argument, the code fails
def text_gen2(file_list):
    random.shuffle(file_list)
    size = len(file_list)
    i = 0
    while True:
        yield file_list[i][0], file_list[i][1]
        i += 1
        if i > size:
            i = 0
            random.shuffle(file_list)

tf_dataset2 = tf.data.Dataset.from_generator(text_gen2, args=[some_list],
                                              output_types=(tf.int32, tf.string),
                                              output_shapes=((), ()))
for count_batch in tf_dataset2.repeat().batch(3).take(2):
    print(count_batch)
ValueError: Can't convert Python sequence with mixed types to Tensor.
I noticed that when I pass a list of integers as an argument, the code works. However, a list of tuples seems to make it crash. Can someone shed some light on this?
The problem is exactly what the error says: you cannot have heterogeneous data types (int and str) in the same tf.Tensor. I made a few changes and came up with the code below.
Separate some_list into two lists using zip(), i.e. an int_list and a str_list, and make your generator function accept two lists.
I don't understand why you're manually shuffling things within the generator; you can do it more cleanly using tf.data.Dataset.shuffle().
import tensorflow as tf
import random
import numpy as np

some_list = [(1,'One'), [2,'Two'], [3,'Three'], [4,'Four'],
             (5,'Five'), [6,'Six'], [7,'Seven'], [8,'Eight']]

def text_gen2(int_list, str_list):
    for x, y in zip(int_list, str_list):
        yield x, y

tf_dataset2 = tf.data.Dataset.from_generator(
    text_gen2,
    args=list(zip(*some_list)),
    output_types=(tf.int32, tf.string), output_shapes=((), ())
)

i = 0
for count_batch in tf_dataset2.repeat().batch(4).shuffle(buffer_size=6):
    print(count_batch)
    i += 1
    if i > 10:
        break
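Since the data already fits in memory, a possible alternative (a sketch, not part of the original answer) is to drop the generator entirely and build the dataset with from_tensor_slices, letting tf.data do the shuffling:

# split the mixed list into two homogeneous sequences
ints, strs = zip(*some_list)
tf_dataset3 = tf.data.Dataset.from_tensor_slices((list(ints), list(strs)))

# shuffle the elements themselves, then batch
for count_batch in tf_dataset3.shuffle(buffer_size=8).repeat().batch(3).take(2):
    print(count_batch)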

Why am I getting 'isinstance': Cannot determine Numba type?

I am new to Numba. I am trying to accelerate a pretty complicated solver; however, I keep getting an error such as
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Untyped global name 'isinstance': Cannot determine Numba type of <class 'builtin_function_or_method'>
I wrote a small example to reproduce the same error:
import numba
import numpy as np
from numba import types
from numpy import zeros_like, isfinite
from numpy.linalg import solve
from numpy.random import uniform

@numba.njit(parallel=True)
def foo(A_, b_, M1=None, M2=None):
    x_ = zeros_like(b_)
    r = b_ - A_.dot(x_)
    flag = 1
    if isinstance(M1, types.NoneType):  # Error here
        y = r
    else:
        y = solve(M1, r)
    if not isfinite(y).any():
        flag = 2
    if isinstance(M2, types.NoneType):
        z = y
    else:
        z = solve(M2, y)
    if not isfinite(z).any():
        flag = 2
    return z, flag

N = 10
tmp = np.random.rand(N, N)
A = np.dot(tmp, tmp.T)
x = np.zeros((N, 1), dtype=np.float64)
b = np.vstack([uniform(0.0, 1.0) for i in range(N)])
X_1, info = foo(A, b)
Also if I change the decorator to generated_jit() I get the following error:
r = b_ - A_.dot(x_)
AttributeError: 'Array' object has no attribute 'dot'
Numba compiles the function and requires every variable to be statically typed. This means each variable has exactly one type: a variable cannot be both NoneType and something else, unlike in CPython, which is based on dynamic typing (dynamic typing is also a major source of CPython's slowdown). Thus, using isinstance in nopython-JITed Numba functions does not make much sense; in fact, this built-in function is not supported.
That being said, Numba supports optional arguments by specifying optional(ArgumentType) in the signature (note that the resulting type of the variable is optional(ArgumentType), not ArgumentType nor NoneType). You can then test whether the argument was set using if yourArgument is None:. I do not know what the types of M1 and M2 are in your code, but they need to be declared explicitly in the signature as optional arguments.
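A minimal sketch of that pattern (not the poster's full solver; the assumed types are that M1 is either None or a 2-D, C-contiguous float64 array):

import numpy as np
from numba import njit, optional, float64

# M1 is declared optional(...), so it may be None or a 2-D array;
# the None case is detected with `is None` instead of isinstance.
@njit(float64[:, :](float64[:, ::1], optional(float64[:, ::1])))
def apply_preconditioner(r, M1):
    if M1 is None:                 # allowed on an optional-typed argument
        return r.copy()
    return np.linalg.solve(M1, r)  # np.linalg.solve is supported in nopython mode

r = np.random.rand(4, 1)
M = np.eye(4)
print(apply_preconditioner(r, None))  # just copies r
print(apply_preconditioner(r, M))     # solves M @ y = r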

How to use a SymPy-generated Jacobian matrix in the solution of an ODE system?

I have a first-order ODE system composed of 3 differential equations. I want to solve it with scipy.integrate.solve_ivp's BDF method, so I need to calculate the Jacobian matrix of the system (which I did with the help of SymPy).
If I didn't misunderstand, according to the scipy.integrate.solve_ivp documentation, you must provide the Jacobian matrix in the form jac(t, u), where u holds the state variables of your ODE system. To this end I lambdify the Jacobian matrix accordingly.
And my problem arises here. Although I am able to evaluate jac(t, u) for some (t, u) such as ((1/800), (150, 1E-6, 3)), I cannot pass array arguments to my jac. When I pass jac(t, u) as an argument to solve_ivp, it gives an error message. So how should I pass the Jacobian matrix? Or is my lambdify not proper?
This is my code. I appreciate any help.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def cvs(t, u):
    u1, u2, u3 = u
    def Qmi(t):
        return t**2
    u1p = Qmi(t)*u3
    u2p = (u1**2)*np.cos(2*np.pi*200*t)
    u3p = (np.sin(2*np.pi*t))*u2**-1
    return [u1p, u2p, u3p]

def jac_func():
    ######### DEFINE THE ODE SYSTEM #########
    import sympy
    sympy.init_printing()
    t = sympy.symbols("t")
    Q_mi = sympy.Function("Q_mi")(t)
    u1 = sympy.Function("u1")(t)
    u2 = sympy.Function("u2")(t)
    u3 = sympy.Function("u3")(t)
    Q_mi = t**2
    u1p = (u3*Q_mi)
    u2p = (u1**2)*sympy.cos(2*sympy.pi*200*t)
    u3p = sympy.sin(2*sympy.pi*5*t)*u2**-1
    ####### CALCULATE THE JACOBIAN ########
    ode_rhs = sympy.Matrix([u1p, u2p, u3p])
    ode_var = sympy.Matrix([u1, u2, u3])
    jac = sympy.Matrix([[ode.diff(var) for var in ode_var] for ode in ode_rhs])
    u = (u1, u2, u3)
    jac_np = sympy.lambdify((t, u), jac, "numpy")
    return jac_np

jac_np = jac_func()
U_0 = [500, 20, 20]
t = np.linspace(0, 100, 10000)
solf = solve_ivp(cvs, (0, 100), y0=U_0, method='BDF', jac=jac_np(t, U_0), t_eval=t)
error message:
ValueError Traceback (most recent call last)
<ipython-input-1-8b86ffb3a7cf> in <module>()
41 t = np.linspace(0,100,10000)
42
---> 43 solf = solve_ivp(cvs,(0,100),y0=U_0,method = 'BDF',jac=jac_np(t,U_0),t_eval=t)
<lambdifygenerated-1> in _lambdifygenerated(t, _Dummy_188)
1 def _lambdifygenerated(t, _Dummy_188):
2 [_Dummy_185, _Dummy_186, _Dummy_187] = _Dummy_188
----> 3 return (array([[0, 0, t**2], [2*_Dummy_185*cos(400*pi*t), 0, 0], [0, -sin(10*pi*t)/_Dummy_186**2, 0]]))
ValueError: setting an array element with a sequence.
You are getting the problem because you are doing exactly what the error message says: passing an array where the procedure expects a single number. In
solf = solve_ivp(cvs, (0, 100), y0=U_0, method='BDF', jac=jac_np(t, U_0), t_eval=t)
you are trying to pass the constant matrix jac_np(t, U_0) as the Jacobian argument. However, at that point t contains all the t values that you want output samples at, so each row becomes a list of [array, scalar, scalar], which cannot be packed into a numpy array.
Long story short: remove the arguments and pass the Jacobian as a callable function, as you quite probably intended,
solf = solve_ivp(cvs, (0, 100), y0=U_0, method='BDF', jac=jac_np, t_eval=t)
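As a quick sanity check (a sketch reusing values from the question) before handing the callable to solve_ivp, the lambdified Jacobian should return a plain (3, 3) array when evaluated at a scalar t and a single state vector:

jac_np = jac_func()
J = jac_np(1/800, (150, 1E-6, 3))   # scalar t, one state vector
print(np.asarray(J).shape)          # (3, 3), ready to be passed as jac=jac_np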

How to write a flexible multiple exponential fit

I'd like to write a more or less universal fit function for the general function
$f(t) = \sum_i a_i \exp(-t/\tau_i)$
for some data I have.
Below is example code for a biexponential function, but I would like to be able to fit a monoexponential or a triexponential function with the smallest code adaptations possible.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
t = np.linspace(0, 10, 100)
a_1 = 1
a_2 = 1
tau_1 = 5
tau_2 = 1
data = 1*np.exp(-t/5) + 1*np.exp(-t/1)
data += 0.2 * np.random.normal(size=t.size)
def func(t, a_1, tau_1, a_2, tau_2):  # plus more exponential functions
    return a_1*np.exp(-t/tau_1) + a_2*np.exp(-t/tau_2)
popt, pcov = curve_fit(func, t, data)
print(popt)
plt.plot(t, data, label="data")
plt.plot(t, func(t, *popt), label="fit")
plt.legend()
plt.show()
In principle I thought of redefining the function to a general form
def func(t, a, tau):  # with a and tau as lists
    tmp = 0
    for i in range(len(a)):
        tmp += a[i]*np.exp(-t/tau[i])
    return tmp
and passing the arguments to curve_fit in the form of lists or tuples. However I get a TypeError as shown below.
TypeError: func() takes 4 positional arguments but 7 were given
Is there any way to rewrite the code so that the degree of the multiexponential function is "determined" only by the input parameters passed to curve_fit? So that passing
a = (1)
results in a monoexponential function whereas passing
a = (1, 2, 3)
results in a triexponential function?
Regards
Yes, that can be done easily with NumPy broadcasting:
def func(t, a, taus):  # plus more exponential functions
    a = np.array(a)[:, None]
    taus = np.array(taus)[:, None]
    return (a*np.exp(-t/taus)).sum(axis=0)
func accepts 2 lists, converts them into 2-dim np.array, computes a matrix with all the exponentials and then sums it up. Example:
t=np.arange(100).astype(float)
out=func(t,[1,2],[0.3,4])
plt.plot(out)
Keep in mind a and taus must be the same length, so sanitize your inputs as you see fit. Or you could also directly pass np.arrays instead of lists.
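To hook this into curve_fit, which passes a flat sequence of scalar parameters, one option (a sketch, assuming the first half of the parameter vector holds the amplitudes and the second half the decay times; the number of exponential terms is then fixed by the length of p0) is a small wrapper:

import numpy as np
from scipy.optimize import curve_fit

def func(t, a, taus):
    a = np.array(a)[:, None]
    taus = np.array(taus)[:, None]
    return (a*np.exp(-t/taus)).sum(axis=0)

def fit_wrapper(t, *params):
    # first half of params are the amplitudes a_i, second half the tau_i
    n = len(params) // 2
    return func(t, params[:n], params[n:])

t = np.linspace(0, 10, 100)
data = np.exp(-t/5) + np.exp(-t/1) + 0.2*np.random.normal(size=t.size)

# p0 with 4 entries -> biexponential; 6 entries would give a triexponential
popt, pcov = curve_fit(fit_wrapper, t, data, p0=[1, 1, 5, 1])
print(popt)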

scipy.optimize failure with a "vectorized" implementation

I have an optimization problem (1d) coded in 2 ways: one using a for loop and another using numpy arrays. The for-loop version works fine but the numpy one fails.
Actually it is a bit more complicated: the numpy version can work with different starting points (!!), or if I choose another optimization algorithm like CG.
The 2 versions (functions and gradients) are giving the same results and the returned types are also the same as far as I can tell.
Here is my example, what am I missing?
import numpy as np
from scipy.optimize import minimize
# local params
v1 = np.array([1., 1.])
v2 = np.array([1., 2.])
# local functions
def f1(x):
    s = 0
    for i in range(len(v1)):
        s += (v1[i]*x - v2[i])**2
    return 0.5*s/len(v1)

def df1(x):
    g = 0
    for i in range(len(v1)):
        g += v1[i]*(v1[i]*x - v2[i])
    return g/len(v1)

def f2(x):
    return 0.5*np.sum((v1*x - v2)**2)/len(v1)

def df2(x):
    return np.sum(v1*(v1*x - v2))/len(v1)
x0 = 10. # x0 = 2 works
# tests...
assert np.abs(f1(x0)-f2(x0)) < 1.e-6 and np.abs(df1(x0)-df2(x0)) < 1.e-6 \
and np.abs((f1(x0+1.e-6)-f1(x0))/(1.e-6)-df1(x0)) < 1.e-4
# BFGS for f1: OK
o = minimize(f1, x0, method='BFGS', jac=df1)
if not o.success:
    print('FAILURE', o)
else:
    print('SUCCESS min = %f reached at %f' % (f1(o.x[0]), o.x[0]))
# BFGS for f2: failure
o = minimize(f2, x0, method='BFGS', jac=df2)
if not o.success:
    print('FAILURE', o)
else:
    print('SUCCESS min = %f reached at %f' % (f2(o.x[0]), o.x[0]))
The error I get is
A1 = I - sk[:, numpy.newaxis] * yk[numpy.newaxis, :] * rhok
IndexError: invalid index to scalar variable.
but it doesn't really help me, since the same code works with some other starting values.
I am using an all new fresh python install (python 3.5.2, scipy 0.18.1 and numpy 1.11.3).
The solver expects the return value of the Jacobian df2 to have the same shape as its input x. Even though you passed in a scalar here, it is actually converted into a single-element ndarray. Since you used np.sum, your result became a scalar, and that causes strange things to happen.
Enclose the scalar result of df2 in an np.array, and your code should work.
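A minimal sketch of that fix (one way to do it: wrap the result in a one-element array so the gradient has the same shape as x):

def df2(x):
    # return a 1-element array so the gradient matches the shape of x
    return np.array([np.sum(v1*(v1*x - v2))/len(v1)])

o = minimize(f2, x0, method='BFGS', jac=df2)
print(o.success, o.x)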